00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 109 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3287 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.084 The recommended git tool is: git 00:00:00.084 using credential 00000000-0000-0000-0000-000000000002 00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.133 Fetching changes from the remote Git repository 00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.190 Using shallow fetch with depth 1 00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.190 > git --version # timeout=10 00:00:00.232 > git --version # 'git version 2.39.2' 00:00:00.232 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.258 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.258 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.959 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.972 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.984 Checking out Revision f7830e7c5d95762fb88ef73dac888ff5050122c9 (FETCH_HEAD) 00:00:06.984 > git config core.sparsecheckout # timeout=10 00:00:06.995 > git read-tree -mu HEAD # timeout=10 00:00:07.012 > git checkout -f f7830e7c5d95762fb88ef73dac888ff5050122c9 # timeout=5 00:00:07.031 Commit message: "doc: update AC01 PDU information" 00:00:07.031 > git rev-list --no-walk f7830e7c5d95762fb88ef73dac888ff5050122c9 # timeout=10 00:00:07.195 [Pipeline] Start of Pipeline 00:00:07.210 [Pipeline] library 00:00:07.212 Loading library shm_lib@master 00:00:07.212 Library shm_lib@master is cached. Copying from home. 
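The checkout above is a shallow, credentialed fetch: only the tip of refs/heads/master is transferred (--depth=1), and the pinned revision is then checked out detached. A minimal standalone sketch of the equivalent sequence, assuming no Jenkins credential helper; the repository URL and revision are taken from the log, while the "jbp" directory name is illustrative:

    #!/usr/bin/env bash
    # Sketch of the shallow checkout performed above. URL and SHA come from
    # the log; the target directory name is an assumption.
    set -euo pipefail

    REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    REV=f7830e7c5d95762fb88ef73dac888ff5050122c9

    git init jbp && cd jbp
    # --depth=1 transfers only the branch tip, which keeps the clone fast
    git fetch --tags --force --progress --depth=1 -- "$REPO" refs/heads/master
    git checkout -f "$REV"   # detached checkout of the pinned revision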
00:00:07.230 [Pipeline] node 00:00:22.232 Still waiting to schedule task 00:00:22.233 ‘FCP03’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘FCP04’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘FCP07’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘FCP08’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘FCP09’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘FCP10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘FCP11’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘FCP12’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘GP10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘GP13’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘GP15’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘GP16’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘GP18’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘GP19’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘GP20’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘GP21’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘GP22’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘GP24’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘GP4’ is offline 00:00:22.233 ‘GP5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘Jenkins’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘ME1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘ME2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘ME3’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘PE5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM28’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM29’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM30’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM31’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM32’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM33’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM34’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM35’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM6’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM7’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘SM8’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘VM-host-PE1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘VM-host-PE2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘VM-host-PE3’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘VM-host-PE4’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘VM-host-SM18’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘VM-host-WFP25’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘WCP0’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘WFP10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘WFP17’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘WFP2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘WFP32’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘WFP34’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘WFP35’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.233 ‘WFP36’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.234 ‘WFP37’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.234 ‘WFP38’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.234 ‘WFP49’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.234 ‘WFP63’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.234 ‘WFP68’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.234 ‘WFP69’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.234 ‘ipxe-staging’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.234 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘DiskNvme&&NetCVL’ 
00:00:22.234 ‘spdk-pxe-01’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.234 ‘spdk-pxe-02’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:12:42.755 Running on GP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:12:42.756 [Pipeline] { 00:12:42.770 [Pipeline] catchError 00:12:42.773 [Pipeline] { 00:12:42.786 [Pipeline] wrap 00:12:42.795 [Pipeline] { 00:12:42.800 [Pipeline] stage 00:12:42.801 [Pipeline] { (Prologue) 00:12:42.991 [Pipeline] sh 00:12:43.373 + logger -p user.info -t JENKINS-CI 00:12:43.464 [Pipeline] echo 00:12:43.465 Node: GP12 00:12:43.474 [Pipeline] sh 00:12:43.903 [Pipeline] setCustomBuildProperty 00:12:43.916 [Pipeline] echo 00:12:43.917 Cleanup processes 00:12:43.921 [Pipeline] sh 00:12:44.220 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:44.220 2545710 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:44.235 [Pipeline] sh 00:12:44.521 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:44.521 ++ grep -v 'sudo pgrep' 00:12:44.521 ++ awk '{print $1}' 00:12:44.521 + sudo kill -9 00:12:44.521 + true 00:12:44.540 [Pipeline] cleanWs 00:12:44.551 [WS-CLEANUP] Deleting project workspace... 00:12:44.551 [WS-CLEANUP] Deferred wipeout is used... 00:12:44.558 [WS-CLEANUP] done 00:12:44.564 [Pipeline] setCustomBuildProperty 00:12:44.578 [Pipeline] sh 00:12:44.859 + sudo git config --global --replace-all safe.directory '*' 00:12:44.964 [Pipeline] httpRequest 00:12:44.993 [Pipeline] echo 00:12:44.994 Sorcerer 10.211.164.101 is alive 00:12:45.004 [Pipeline] httpRequest 00:12:45.009 HttpMethod: GET 00:12:45.010 URL: http://10.211.164.101/packages/jbp_f7830e7c5d95762fb88ef73dac888ff5050122c9.tar.gz 00:12:45.011 Sending request to url: http://10.211.164.101/packages/jbp_f7830e7c5d95762fb88ef73dac888ff5050122c9.tar.gz 00:12:45.014 Response Code: HTTP/1.1 200 OK 00:12:45.015 Success: Status code 200 is in the accepted range: 200,404 00:12:45.016 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f7830e7c5d95762fb88ef73dac888ff5050122c9.tar.gz 00:12:45.163 [Pipeline] sh 00:12:45.446 + tar --no-same-owner -xf jbp_f7830e7c5d95762fb88ef73dac888ff5050122c9.tar.gz 00:12:45.460 [Pipeline] httpRequest 00:12:45.474 [Pipeline] echo 00:12:45.476 Sorcerer 10.211.164.101 is alive 00:12:45.481 [Pipeline] httpRequest 00:12:45.485 HttpMethod: GET 00:12:45.485 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:12:45.486 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:12:45.487 Response Code: HTTP/1.1 200 OK 00:12:45.488 Success: Status code 200 is in the accepted range: 200,404 00:12:45.488 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:12:49.449 [Pipeline] sh 00:12:49.731 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:12:52.271 [Pipeline] sh 00:12:52.551 + git -C spdk log --oneline -n5 00:12:52.551 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:12:52.551 330a4f94d nvme: check pthread_mutex_destroy() return value 00:12:52.551 7b72c3ced nvme: add nvme_ctrlr_lock 00:12:52.551 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:12:52.551 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:12:52.570 [Pipeline] withCredentials 00:12:52.580 > git --version # timeout=10 00:12:52.592 > git --version # 'git version 2.39.2' 00:12:52.608 Masking supported pattern matches of 
$GIT_PASSWORD or $GIT_ASKPASS 00:12:52.611 [Pipeline] { 00:12:52.622 [Pipeline] retry 00:12:52.624 [Pipeline] { 00:12:52.643 [Pipeline] sh 00:12:52.950 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:12:54.334 [Pipeline] } 00:12:54.357 [Pipeline] // retry 00:12:54.362 [Pipeline] } 00:12:54.385 [Pipeline] // withCredentials 00:12:54.395 [Pipeline] httpRequest 00:12:54.413 [Pipeline] echo 00:12:54.415 Sorcerer 10.211.164.101 is alive 00:12:54.424 [Pipeline] httpRequest 00:12:54.429 HttpMethod: GET 00:12:54.429 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:12:54.430 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:12:54.432 Response Code: HTTP/1.1 200 OK 00:12:54.432 Success: Status code 200 is in the accepted range: 200,404 00:12:54.433 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:12:55.660 [Pipeline] sh 00:12:55.939 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:12:57.854 [Pipeline] sh 00:12:58.129 + git -C dpdk log --oneline -n5 00:12:58.129 caf0f5d395 version: 22.11.4 00:12:58.129 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:12:58.129 dc9c799c7d vhost: fix missing spinlock unlock 00:12:58.129 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:12:58.129 6ef77f2a5e net/gve: fix RX buffer size alignment 00:12:58.140 [Pipeline] } 00:12:58.157 [Pipeline] // stage 00:12:58.167 [Pipeline] stage 00:12:58.170 [Pipeline] { (Prepare) 00:12:58.188 [Pipeline] writeFile 00:12:58.205 [Pipeline] sh 00:12:58.483 + logger -p user.info -t JENKINS-CI 00:12:58.495 [Pipeline] sh 00:12:58.773 + logger -p user.info -t JENKINS-CI 00:12:58.785 [Pipeline] sh 00:12:59.066 + cat autorun-spdk.conf 00:12:59.066 SPDK_RUN_FUNCTIONAL_TEST=1 00:12:59.066 SPDK_TEST_NVMF=1 00:12:59.066 SPDK_TEST_NVME_CLI=1 00:12:59.066 SPDK_TEST_NVMF_TRANSPORT=tcp 00:12:59.066 SPDK_TEST_NVMF_NICS=e810 00:12:59.066 SPDK_TEST_VFIOUSER=1 00:12:59.066 SPDK_RUN_UBSAN=1 00:12:59.066 NET_TYPE=phy 00:12:59.066 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:12:59.066 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:59.072 RUN_NIGHTLY=1 00:12:59.078 [Pipeline] readFile 00:12:59.106 [Pipeline] withEnv 00:12:59.108 [Pipeline] { 00:12:59.122 [Pipeline] sh 00:12:59.402 + set -ex 00:12:59.402 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:12:59.402 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:12:59.402 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:12:59.402 ++ SPDK_TEST_NVMF=1 00:12:59.402 ++ SPDK_TEST_NVME_CLI=1 00:12:59.402 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:12:59.402 ++ SPDK_TEST_NVMF_NICS=e810 00:12:59.402 ++ SPDK_TEST_VFIOUSER=1 00:12:59.402 ++ SPDK_RUN_UBSAN=1 00:12:59.402 ++ NET_TYPE=phy 00:12:59.402 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:12:59.402 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:59.402 ++ RUN_NIGHTLY=1 00:12:59.402 + case $SPDK_TEST_NVMF_NICS in 00:12:59.402 + DRIVERS=ice 00:12:59.402 + [[ tcp == \r\d\m\a ]] 00:12:59.402 + [[ -n ice ]] 00:12:59.402 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:12:59.402 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:12:59.402 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:12:59.402 rmmod: ERROR: Module irdma is not currently loaded 00:12:59.402 rmmod: ERROR: Module i40iw is not currently loaded 00:12:59.402 
rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:12:59.402 + true 00:12:59.402 + for D in $DRIVERS 00:12:59.402 + sudo modprobe ice 00:12:59.402 + exit 0 00:12:59.411 [Pipeline] } 00:12:59.428 [Pipeline] // withEnv 00:12:59.433 [Pipeline] } 00:12:59.451 [Pipeline] // stage 00:12:59.462 [Pipeline] catchError 00:12:59.464 [Pipeline] { 00:12:59.479 [Pipeline] timeout 00:12:59.479 Timeout set to expire in 50 min 00:12:59.481 [Pipeline] { 00:12:59.496 [Pipeline] stage 00:12:59.497 [Pipeline] { (Tests) 00:12:59.510 [Pipeline] sh 00:12:59.787 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:12:59.787 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:12:59.787 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:12:59.787 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:12:59.787 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:59.787 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:12:59.787 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:12:59.787 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:12:59.787 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:12:59.787 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:12:59.787 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:12:59.787 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:12:59.787 + source /etc/os-release 00:12:59.787 ++ NAME='Fedora Linux' 00:12:59.787 ++ VERSION='38 (Cloud Edition)' 00:12:59.787 ++ ID=fedora 00:12:59.787 ++ VERSION_ID=38 00:12:59.787 ++ VERSION_CODENAME= 00:12:59.787 ++ PLATFORM_ID=platform:f38 00:12:59.787 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:12:59.787 ++ ANSI_COLOR='0;38;2;60;110;180' 00:12:59.787 ++ LOGO=fedora-logo-icon 00:12:59.787 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:12:59.787 ++ HOME_URL=https://fedoraproject.org/ 00:12:59.787 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:12:59.787 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:12:59.787 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:12:59.787 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:12:59.787 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:12:59.787 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:12:59.787 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:12:59.787 ++ SUPPORT_END=2024-05-14 00:12:59.787 ++ VARIANT='Cloud Edition' 00:12:59.787 ++ VARIANT_ID=cloud 00:12:59.787 + uname -a 00:12:59.787 Linux spdk-gp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:12:59.787 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:13:01.161 Hugepages 00:13:01.161 node hugesize free / total 00:13:01.161 node0 1048576kB 0 / 0 00:13:01.161 node0 2048kB 0 / 0 00:13:01.161 node1 1048576kB 0 / 0 00:13:01.161 node1 2048kB 0 / 0 00:13:01.161 00:13:01.161 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:01.161 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:13:01.161 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:13:01.161 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:13:01.161 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:13:01.161 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:13:01.161 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:13:01.161 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:13:01.161 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:13:01.161 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:13:01.161 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 
00:13:01.161 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:13:01.161 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:13:01.161 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:13:01.161 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:13:01.161 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:13:01.161 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:13:01.161 NVMe 0000:81:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:13:01.161 + rm -f /tmp/spdk-ld-path 00:13:01.161 + source autorun-spdk.conf 00:13:01.161 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:13:01.161 ++ SPDK_TEST_NVMF=1 00:13:01.161 ++ SPDK_TEST_NVME_CLI=1 00:13:01.161 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:01.161 ++ SPDK_TEST_NVMF_NICS=e810 00:13:01.161 ++ SPDK_TEST_VFIOUSER=1 00:13:01.161 ++ SPDK_RUN_UBSAN=1 00:13:01.161 ++ NET_TYPE=phy 00:13:01.161 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:13:01.161 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:01.161 ++ RUN_NIGHTLY=1 00:13:01.161 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:13:01.161 + [[ -n '' ]] 00:13:01.161 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:01.161 + for M in /var/spdk/build-*-manifest.txt 00:13:01.161 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:13:01.161 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:13:01.162 + for M in /var/spdk/build-*-manifest.txt 00:13:01.162 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:13:01.162 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:13:01.162 ++ uname 00:13:01.162 + [[ Linux == \L\i\n\u\x ]] 00:13:01.162 + sudo dmesg -T 00:13:01.162 + sudo dmesg --clear 00:13:01.162 + dmesg_pid=2546510 00:13:01.162 + sudo dmesg -Tw 00:13:01.162 + [[ Fedora Linux == FreeBSD ]] 00:13:01.162 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:01.162 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:01.162 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:13:01.162 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:13:01.162 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:13:01.162 + [[ -x /usr/src/fio-static/fio ]] 00:13:01.162 + export FIO_BIN=/usr/src/fio-static/fio 00:13:01.162 + FIO_BIN=/usr/src/fio-static/fio 00:13:01.162 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:13:01.162 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:13:01.162 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:13:01.162 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:01.162 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:01.162 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:13:01.162 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:01.162 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:01.162 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:13:01.162 Test configuration: 00:13:01.162 SPDK_RUN_FUNCTIONAL_TEST=1 00:13:01.162 SPDK_TEST_NVMF=1 00:13:01.162 SPDK_TEST_NVME_CLI=1 00:13:01.162 SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:01.162 SPDK_TEST_NVMF_NICS=e810 00:13:01.162 SPDK_TEST_VFIOUSER=1 00:13:01.162 SPDK_RUN_UBSAN=1 00:13:01.162 NET_TYPE=phy 00:13:01.162 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:13:01.162 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:01.162 RUN_NIGHTLY=1 16:26:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.162 16:26:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:13:01.162 16:26:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.162 16:26:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.162 16:26:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.162 16:26:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.162 16:26:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.162 16:26:20 -- paths/export.sh@5 -- $ export PATH 00:13:01.162 16:26:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.162 16:26:20 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:13:01.162 16:26:20 -- common/autobuild_common.sh@437 -- $ date +%s 00:13:01.162 16:26:20 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721658380.XXXXXX 00:13:01.162 16:26:20 -- common/autobuild_common.sh@437 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721658380.QlsBJk 00:13:01.162 16:26:20 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:13:01.162 16:26:20 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:13:01.162 16:26:20 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:01.162 16:26:20 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:13:01.162 16:26:20 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:13:01.162 16:26:20 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:13:01.162 16:26:20 -- common/autobuild_common.sh@453 -- $ get_config_params 00:13:01.162 16:26:20 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:13:01.162 16:26:20 -- common/autotest_common.sh@10 -- $ set +x 00:13:01.162 16:26:20 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:13:01.162 16:26:20 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:13:01.162 16:26:20 -- pm/common@17 -- $ local monitor 00:13:01.162 16:26:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:01.162 16:26:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:01.162 16:26:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:01.162 16:26:20 -- pm/common@21 -- $ date +%s 00:13:01.162 16:26:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:01.162 16:26:20 -- pm/common@25 -- $ sleep 1 00:13:01.162 16:26:20 -- pm/common@21 -- $ date +%s 00:13:01.162 16:26:20 -- pm/common@21 -- $ date +%s 00:13:01.162 16:26:20 -- pm/common@21 -- $ date +%s 00:13:01.162 16:26:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721658380 00:13:01.162 16:26:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721658380 00:13:01.162 16:26:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721658380 00:13:01.162 16:26:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721658380 00:13:01.162 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721658380_collect-vmstat.pm.log 00:13:01.162 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721658380_collect-cpu-load.pm.log 00:13:01.162 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721658380_collect-cpu-temp.pm.log 00:13:01.162 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721658380_collect-bmc-pm.bmc.pm.log 00:13:02.096 16:26:21 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:13:02.096 16:26:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:13:02.096 16:26:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:13:02.096 16:26:21 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:02.096 16:26:21 -- spdk/autobuild.sh@16 -- $ date -u 00:13:02.096 Mon Jul 22 02:26:21 PM UTC 2024 00:13:02.096 16:26:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:13:02.096 v24.05-13-g5fa2f5086 00:13:02.096 16:26:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:13:02.096 16:26:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:13:02.096 16:26:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:13:02.096 16:26:21 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:13:02.096 16:26:21 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:13:02.096 16:26:21 -- common/autotest_common.sh@10 -- $ set +x 00:13:02.355 ************************************ 00:13:02.355 START TEST ubsan 00:13:02.355 ************************************ 00:13:02.355 16:26:21 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:13:02.355 using ubsan 00:13:02.355 00:13:02.355 real 0m0.000s 00:13:02.355 user 0m0.000s 00:13:02.355 sys 0m0.000s 00:13:02.355 16:26:21 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:13:02.355 16:26:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:13:02.355 ************************************ 00:13:02.355 END TEST ubsan 00:13:02.355 ************************************ 00:13:02.355 16:26:21 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:13:02.355 16:26:21 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:13:02.355 16:26:21 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:13:02.355 16:26:21 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:13:02.355 16:26:21 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:13:02.355 16:26:21 -- common/autotest_common.sh@10 -- $ set +x 00:13:02.355 ************************************ 00:13:02.355 START TEST build_native_dpdk 00:13:02.355 ************************************ 00:13:02.355 16:26:21 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:13:02.355 16:26:21 
build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:02.355 16:26:21 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:13:02.356 caf0f5d395 version: 22.11.4 00:13:02.356 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:13:02.356 dc9c799c7d vhost: fix missing spinlock unlock 00:13:02.356 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:13:02.356 6ef77f2a5e net/gve: fix RX buffer size alignment 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:13:02.356 16:26:21 build_native_dpdk -- 
scripts/common.sh@330 -- $ local ver1 ver1_l 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:13:02.356 16:26:21 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:13:02.356 patching file config/rte_config.h 00:13:02.356 Hunk #1 succeeded at 60 (offset 1 line). 
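The xtrace above steps through cmp_versions from SPDK's scripts/common.sh to decide whether the DPDK tree (22.11.4) is older than 21.11.0 before applying the rte_config.h patch: both version strings are split on '.', '-' and ':' and compared component by component, and the "less than" test returns 1 here (22 > 21), so the patch branch is taken. A simplified standalone rendering of that logic; the function name is illustrative, not the script's own:

    version_lt() {   # returns 0 iff $1 < $2, mirroring the lt/cmp_versions trace above
        local IFS=.-:
        local -a ver1=($1) ver2=($2)   # split each version into numeric components
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # strictly older
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1   # strictly newer
        done
        return 1   # equal versions are not less-than
    }

    version_lt 22.11.4 21.11.0 || echo "22.11.4 >= 21.11.0: apply rte_config.h patch"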
00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:13:02.356 16:26:21 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:13:06.540 The Meson build system 00:13:06.540 Version: 1.3.1 00:13:06.540 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:13:06.540 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:13:06.540 Build type: native build 00:13:06.540 Program cat found: YES (/usr/bin/cat) 00:13:06.540 Project name: DPDK 00:13:06.541 Project version: 22.11.4 00:13:06.541 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:13:06.541 C linker for the host machine: gcc ld.bfd 2.39-16 00:13:06.541 Host machine cpu family: x86_64 00:13:06.541 Host machine cpu: x86_64 00:13:06.541 Message: ## Building in Developer Mode ## 00:13:06.541 Program pkg-config found: YES (/usr/bin/pkg-config) 00:13:06.541 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:13:06.541 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:13:06.541 Program objdump found: YES (/usr/bin/objdump) 00:13:06.541 Program python3 found: YES (/usr/bin/python3) 00:13:06.541 Program cat found: YES (/usr/bin/cat) 00:13:06.541 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:13:06.541 Checking for size of "void *" : 8 00:13:06.541 Checking for size of "void *" : 8 (cached) 00:13:06.541 Library m found: YES 00:13:06.541 Library numa found: YES 00:13:06.541 Has header "numaif.h" : YES 00:13:06.541 Library fdt found: NO 00:13:06.541 Library execinfo found: NO 00:13:06.541 Has header "execinfo.h" : YES 00:13:06.541 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:13:06.541 Run-time dependency libarchive found: NO (tried pkgconfig) 00:13:06.541 Run-time dependency libbsd found: NO (tried pkgconfig) 00:13:06.541 Run-time dependency jansson found: NO (tried pkgconfig) 00:13:06.541 Run-time dependency openssl found: YES 3.0.9 00:13:06.541 Run-time dependency libpcap found: YES 1.10.4 00:13:06.541 Has header "pcap.h" with dependency libpcap: YES 00:13:06.541 Compiler for C supports arguments -Wcast-qual: YES 00:13:06.541 Compiler for C supports arguments -Wdeprecated: YES 00:13:06.541 Compiler for C supports arguments -Wformat: YES 00:13:06.541 Compiler for C supports arguments -Wformat-nonliteral: NO 00:13:06.541 Compiler for C supports arguments -Wformat-security: NO 00:13:06.541 Compiler for C supports arguments -Wmissing-declarations: YES 00:13:06.541 Compiler for C supports arguments -Wmissing-prototypes: YES 00:13:06.541 Compiler for C supports arguments -Wnested-externs: YES 00:13:06.541 Compiler for C supports arguments -Wold-style-definition: YES 00:13:06.541 Compiler for C supports arguments -Wpointer-arith: YES 00:13:06.541 Compiler for C supports arguments -Wsign-compare: YES 00:13:06.541 Compiler for C supports arguments -Wstrict-prototypes: YES 00:13:06.541 Compiler for C supports arguments -Wundef: YES 00:13:06.541 Compiler for C supports arguments -Wwrite-strings: YES 00:13:06.541 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:13:06.541 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:13:06.541 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:13:06.541 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:13:06.541 Compiler for C supports arguments -mavx512f: YES 00:13:06.541 Checking if "AVX512 checking" compiles: YES 00:13:06.541 Fetching value of define "__SSE4_2__" : 1 00:13:06.541 Fetching value of define "__AES__" : 1 00:13:06.541 Fetching value of define "__AVX__" : 1 00:13:06.541 Fetching value of define "__AVX2__" : (undefined) 00:13:06.541 Fetching value of define "__AVX512BW__" : (undefined) 00:13:06.541 Fetching value of define "__AVX512CD__" : (undefined) 00:13:06.541 Fetching value of define "__AVX512DQ__" : (undefined) 00:13:06.541 Fetching value of define "__AVX512F__" : (undefined) 00:13:06.541 Fetching value of define "__AVX512VL__" : (undefined) 00:13:06.541 Fetching value of define "__PCLMUL__" : 1 00:13:06.541 Fetching value of define "__RDRND__" : 1 00:13:06.541 Fetching value of define "__RDSEED__" : (undefined) 00:13:06.541 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:13:06.541 Compiler for C supports arguments -Wno-format-truncation: YES 00:13:06.541 Message: lib/kvargs: Defining dependency "kvargs" 00:13:06.541 Message: lib/telemetry: Defining dependency "telemetry" 00:13:06.541 Checking for function "getentropy" : YES 00:13:06.541 Message: lib/eal: Defining dependency "eal" 00:13:06.541 Message: lib/ring: Defining dependency "ring" 00:13:06.541 Message: lib/rcu: Defining dependency "rcu" 00:13:06.541 Message: lib/mempool: Defining dependency "mempool" 00:13:06.541 Message: lib/mbuf: Defining dependency "mbuf" 00:13:06.541 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:13:06.541 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:06.541 Compiler for C supports arguments -mpclmul: YES 00:13:06.541 Compiler for C supports arguments -maes: YES 00:13:06.541 Compiler for C supports arguments -mavx512f: YES (cached) 00:13:06.541 Compiler for C supports arguments -mavx512bw: YES 00:13:06.541 Compiler for C supports arguments -mavx512dq: YES 00:13:06.541 Compiler for C supports arguments -mavx512vl: YES 00:13:06.541 Compiler for C supports arguments -mvpclmulqdq: YES 00:13:06.541 Compiler for C supports arguments -mavx2: YES 00:13:06.541 Compiler for C supports arguments -mavx: YES 00:13:06.541 Message: lib/net: Defining dependency "net" 00:13:06.541 Message: lib/meter: Defining dependency "meter" 00:13:06.541 Message: lib/ethdev: Defining dependency "ethdev" 00:13:06.541 Message: lib/pci: Defining dependency "pci" 00:13:06.541 Message: lib/cmdline: Defining dependency "cmdline" 00:13:06.541 Message: lib/metrics: Defining dependency "metrics" 00:13:06.541 Message: lib/hash: Defining dependency "hash" 00:13:06.541 Message: lib/timer: Defining dependency "timer" 00:13:06.541 Fetching value of define "__AVX2__" : (undefined) (cached) 00:13:06.541 Compiler for C supports arguments -mavx2: YES (cached) 00:13:06.541 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:06.541 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:13:06.541 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:13:06.541 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:13:06.541 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:13:06.541 Message: lib/acl: Defining dependency "acl" 00:13:06.541 Message: lib/bbdev: Defining dependency "bbdev" 00:13:06.541 Message: lib/bitratestats: Defining dependency "bitratestats" 00:13:06.541 Run-time dependency libelf found: YES 0.190 00:13:06.541 Message: lib/bpf: Defining dependency "bpf" 00:13:06.541 Message: lib/cfgfile: Defining dependency "cfgfile" 00:13:06.541 Message: lib/compressdev: Defining dependency "compressdev" 00:13:06.541 Message: lib/cryptodev: Defining dependency "cryptodev" 00:13:06.541 Message: lib/distributor: Defining dependency "distributor" 00:13:06.541 Message: lib/efd: Defining dependency "efd" 00:13:06.541 Message: lib/eventdev: Defining dependency "eventdev" 00:13:06.541 Message: lib/gpudev: Defining dependency "gpudev" 00:13:06.541 Message: lib/gro: Defining dependency "gro" 00:13:06.541 Message: lib/gso: Defining dependency "gso" 00:13:06.541 Message: lib/ip_frag: Defining dependency "ip_frag" 00:13:06.541 Message: lib/jobstats: Defining dependency "jobstats" 00:13:06.541 Message: lib/latencystats: Defining dependency "latencystats" 00:13:06.541 Message: lib/lpm: Defining dependency "lpm" 00:13:06.541 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:06.541 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:13:06.541 Fetching value of define "__AVX512IFMA__" : (undefined) 00:13:06.541 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:13:06.541 Message: lib/member: Defining dependency "member" 00:13:06.541 Message: lib/pcapng: Defining dependency "pcapng" 00:13:06.541 Compiler for C supports arguments -Wno-cast-qual: YES 00:13:06.541 Message: lib/power: Defining dependency "power" 00:13:06.541 Message: lib/rawdev: Defining dependency "rawdev" 00:13:06.541 Message: lib/regexdev: Defining dependency "regexdev" 
00:13:06.541 Message: lib/dmadev: Defining dependency "dmadev" 00:13:06.541 Message: lib/rib: Defining dependency "rib" 00:13:06.541 Message: lib/reorder: Defining dependency "reorder" 00:13:06.541 Message: lib/sched: Defining dependency "sched" 00:13:06.541 Message: lib/security: Defining dependency "security" 00:13:06.541 Message: lib/stack: Defining dependency "stack" 00:13:06.541 Has header "linux/userfaultfd.h" : YES 00:13:06.541 Message: lib/vhost: Defining dependency "vhost" 00:13:06.541 Message: lib/ipsec: Defining dependency "ipsec" 00:13:06.541 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:06.541 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:13:06.541 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:13:06.541 Compiler for C supports arguments -mavx512bw: YES (cached) 00:13:06.541 Message: lib/fib: Defining dependency "fib" 00:13:06.541 Message: lib/port: Defining dependency "port" 00:13:06.541 Message: lib/pdump: Defining dependency "pdump" 00:13:06.541 Message: lib/table: Defining dependency "table" 00:13:06.541 Message: lib/pipeline: Defining dependency "pipeline" 00:13:06.541 Message: lib/graph: Defining dependency "graph" 00:13:06.541 Message: lib/node: Defining dependency "node" 00:13:06.541 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:13:06.541 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:13:06.541 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:13:06.541 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:13:06.541 Compiler for C supports arguments -Wno-sign-compare: YES 00:13:06.541 Compiler for C supports arguments -Wno-unused-value: YES 00:13:07.475 Compiler for C supports arguments -Wno-format: YES 00:13:07.475 Compiler for C supports arguments -Wno-format-security: YES 00:13:07.475 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:13:07.475 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:13:07.475 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:13:07.475 Compiler for C supports arguments -Wno-unused-parameter: YES 00:13:07.475 Fetching value of define "__AVX2__" : (undefined) (cached) 00:13:07.475 Compiler for C supports arguments -mavx2: YES (cached) 00:13:07.475 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:07.475 Compiler for C supports arguments -mavx512f: YES (cached) 00:13:07.475 Compiler for C supports arguments -mavx512bw: YES (cached) 00:13:07.475 Compiler for C supports arguments -march=skylake-avx512: YES 00:13:07.475 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:13:07.475 Program doxygen found: YES (/usr/bin/doxygen) 00:13:07.475 Configuring doxy-api.conf using configuration 00:13:07.475 Program sphinx-build found: NO 00:13:07.475 Configuring rte_build_config.h using configuration 00:13:07.475 Message: 00:13:07.475 ================= 00:13:07.475 Applications Enabled 00:13:07.475 ================= 00:13:07.475 00:13:07.475 apps: 00:13:07.475 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:13:07.475 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:13:07.475 test-security-perf, 00:13:07.475 00:13:07.475 Message: 00:13:07.475 ================= 00:13:07.475 Libraries Enabled 00:13:07.475 ================= 00:13:07.475 00:13:07.475 libs: 00:13:07.475 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:13:07.475 meter, ethdev, pci, 
cmdline, metrics, hash, timer, acl, 00:13:07.475 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:13:07.476 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:13:07.476 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:13:07.476 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:13:07.476 table, pipeline, graph, node, 00:13:07.476 00:13:07.476 Message: 00:13:07.476 =============== 00:13:07.476 Drivers Enabled 00:13:07.476 =============== 00:13:07.476 00:13:07.476 common: 00:13:07.476 00:13:07.476 bus: 00:13:07.476 pci, vdev, 00:13:07.476 mempool: 00:13:07.476 ring, 00:13:07.476 dma: 00:13:07.476 00:13:07.476 net: 00:13:07.476 i40e, 00:13:07.476 raw: 00:13:07.476 00:13:07.476 crypto: 00:13:07.476 00:13:07.476 compress: 00:13:07.476 00:13:07.476 regex: 00:13:07.476 00:13:07.476 vdpa: 00:13:07.476 00:13:07.476 event: 00:13:07.476 00:13:07.476 baseband: 00:13:07.476 00:13:07.476 gpu: 00:13:07.476 00:13:07.476 00:13:07.476 Message: 00:13:07.476 ================= 00:13:07.476 Content Skipped 00:13:07.476 ================= 00:13:07.476 00:13:07.476 apps: 00:13:07.476 00:13:07.476 libs: 00:13:07.476 kni: explicitly disabled via build config (deprecated lib) 00:13:07.476 flow_classify: explicitly disabled via build config (deprecated lib) 00:13:07.476 00:13:07.476 drivers: 00:13:07.476 common/cpt: not in enabled drivers build config 00:13:07.476 common/dpaax: not in enabled drivers build config 00:13:07.476 common/iavf: not in enabled drivers build config 00:13:07.476 common/idpf: not in enabled drivers build config 00:13:07.476 common/mvep: not in enabled drivers build config 00:13:07.476 common/octeontx: not in enabled drivers build config 00:13:07.476 bus/auxiliary: not in enabled drivers build config 00:13:07.476 bus/dpaa: not in enabled drivers build config 00:13:07.476 bus/fslmc: not in enabled drivers build config 00:13:07.476 bus/ifpga: not in enabled drivers build config 00:13:07.476 bus/vmbus: not in enabled drivers build config 00:13:07.476 common/cnxk: not in enabled drivers build config 00:13:07.476 common/mlx5: not in enabled drivers build config 00:13:07.476 common/qat: not in enabled drivers build config 00:13:07.476 common/sfc_efx: not in enabled drivers build config 00:13:07.476 mempool/bucket: not in enabled drivers build config 00:13:07.476 mempool/cnxk: not in enabled drivers build config 00:13:07.476 mempool/dpaa: not in enabled drivers build config 00:13:07.476 mempool/dpaa2: not in enabled drivers build config 00:13:07.476 mempool/octeontx: not in enabled drivers build config 00:13:07.476 mempool/stack: not in enabled drivers build config 00:13:07.476 dma/cnxk: not in enabled drivers build config 00:13:07.476 dma/dpaa: not in enabled drivers build config 00:13:07.476 dma/dpaa2: not in enabled drivers build config 00:13:07.476 dma/hisilicon: not in enabled drivers build config 00:13:07.476 dma/idxd: not in enabled drivers build config 00:13:07.476 dma/ioat: not in enabled drivers build config 00:13:07.476 dma/skeleton: not in enabled drivers build config 00:13:07.476 net/af_packet: not in enabled drivers build config 00:13:07.476 net/af_xdp: not in enabled drivers build config 00:13:07.476 net/ark: not in enabled drivers build config 00:13:07.476 net/atlantic: not in enabled drivers build config 00:13:07.476 net/avp: not in enabled drivers build config 00:13:07.476 net/axgbe: not in enabled drivers build config 00:13:07.476 net/bnx2x: not in enabled drivers build config 00:13:07.476 net/bnxt: not in 
enabled drivers build config 00:13:07.476 net/bonding: not in enabled drivers build config 00:13:07.476 net/cnxk: not in enabled drivers build config 00:13:07.476 net/cxgbe: not in enabled drivers build config 00:13:07.476 net/dpaa: not in enabled drivers build config 00:13:07.476 net/dpaa2: not in enabled drivers build config 00:13:07.476 net/e1000: not in enabled drivers build config 00:13:07.476 net/ena: not in enabled drivers build config 00:13:07.476 net/enetc: not in enabled drivers build config 00:13:07.476 net/enetfec: not in enabled drivers build config 00:13:07.476 net/enic: not in enabled drivers build config 00:13:07.476 net/failsafe: not in enabled drivers build config 00:13:07.476 net/fm10k: not in enabled drivers build config 00:13:07.476 net/gve: not in enabled drivers build config 00:13:07.476 net/hinic: not in enabled drivers build config 00:13:07.476 net/hns3: not in enabled drivers build config 00:13:07.476 net/iavf: not in enabled drivers build config 00:13:07.476 net/ice: not in enabled drivers build config 00:13:07.476 net/idpf: not in enabled drivers build config 00:13:07.476 net/igc: not in enabled drivers build config 00:13:07.476 net/ionic: not in enabled drivers build config 00:13:07.476 net/ipn3ke: not in enabled drivers build config 00:13:07.476 net/ixgbe: not in enabled drivers build config 00:13:07.476 net/kni: not in enabled drivers build config 00:13:07.476 net/liquidio: not in enabled drivers build config 00:13:07.476 net/mana: not in enabled drivers build config 00:13:07.476 net/memif: not in enabled drivers build config 00:13:07.476 net/mlx4: not in enabled drivers build config 00:13:07.476 net/mlx5: not in enabled drivers build config 00:13:07.476 net/mvneta: not in enabled drivers build config 00:13:07.476 net/mvpp2: not in enabled drivers build config 00:13:07.476 net/netvsc: not in enabled drivers build config 00:13:07.476 net/nfb: not in enabled drivers build config 00:13:07.476 net/nfp: not in enabled drivers build config 00:13:07.476 net/ngbe: not in enabled drivers build config 00:13:07.476 net/null: not in enabled drivers build config 00:13:07.476 net/octeontx: not in enabled drivers build config 00:13:07.476 net/octeon_ep: not in enabled drivers build config 00:13:07.476 net/pcap: not in enabled drivers build config 00:13:07.476 net/pfe: not in enabled drivers build config 00:13:07.476 net/qede: not in enabled drivers build config 00:13:07.476 net/ring: not in enabled drivers build config 00:13:07.476 net/sfc: not in enabled drivers build config 00:13:07.476 net/softnic: not in enabled drivers build config 00:13:07.476 net/tap: not in enabled drivers build config 00:13:07.476 net/thunderx: not in enabled drivers build config 00:13:07.476 net/txgbe: not in enabled drivers build config 00:13:07.476 net/vdev_netvsc: not in enabled drivers build config 00:13:07.476 net/vhost: not in enabled drivers build config 00:13:07.476 net/virtio: not in enabled drivers build config 00:13:07.476 net/vmxnet3: not in enabled drivers build config 00:13:07.476 raw/cnxk_bphy: not in enabled drivers build config 00:13:07.476 raw/cnxk_gpio: not in enabled drivers build config 00:13:07.476 raw/dpaa2_cmdif: not in enabled drivers build config 00:13:07.476 raw/ifpga: not in enabled drivers build config 00:13:07.476 raw/ntb: not in enabled drivers build config 00:13:07.476 raw/skeleton: not in enabled drivers build config 00:13:07.476 crypto/armv8: not in enabled drivers build config 00:13:07.476 crypto/bcmfs: not in enabled drivers build config 00:13:07.476 
crypto/caam_jr: not in enabled drivers build config 00:13:07.476 crypto/ccp: not in enabled drivers build config 00:13:07.476 crypto/cnxk: not in enabled drivers build config 00:13:07.476 crypto/dpaa_sec: not in enabled drivers build config 00:13:07.476 crypto/dpaa2_sec: not in enabled drivers build config 00:13:07.476 crypto/ipsec_mb: not in enabled drivers build config 00:13:07.476 crypto/mlx5: not in enabled drivers build config 00:13:07.476 crypto/mvsam: not in enabled drivers build config 00:13:07.476 crypto/nitrox: not in enabled drivers build config 00:13:07.476 crypto/null: not in enabled drivers build config 00:13:07.476 crypto/octeontx: not in enabled drivers build config 00:13:07.476 crypto/openssl: not in enabled drivers build config 00:13:07.476 crypto/scheduler: not in enabled drivers build config 00:13:07.476 crypto/uadk: not in enabled drivers build config 00:13:07.476 crypto/virtio: not in enabled drivers build config 00:13:07.476 compress/isal: not in enabled drivers build config 00:13:07.476 compress/mlx5: not in enabled drivers build config 00:13:07.476 compress/octeontx: not in enabled drivers build config 00:13:07.476 compress/zlib: not in enabled drivers build config 00:13:07.476 regex/mlx5: not in enabled drivers build config 00:13:07.476 regex/cn9k: not in enabled drivers build config 00:13:07.476 vdpa/ifc: not in enabled drivers build config 00:13:07.476 vdpa/mlx5: not in enabled drivers build config 00:13:07.476 vdpa/sfc: not in enabled drivers build config 00:13:07.476 event/cnxk: not in enabled drivers build config 00:13:07.476 event/dlb2: not in enabled drivers build config 00:13:07.476 event/dpaa: not in enabled drivers build config 00:13:07.476 event/dpaa2: not in enabled drivers build config 00:13:07.476 event/dsw: not in enabled drivers build config 00:13:07.476 event/opdl: not in enabled drivers build config 00:13:07.476 event/skeleton: not in enabled drivers build config 00:13:07.476 event/sw: not in enabled drivers build config 00:13:07.476 event/octeontx: not in enabled drivers build config 00:13:07.476 baseband/acc: not in enabled drivers build config 00:13:07.476 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:13:07.476 baseband/fpga_lte_fec: not in enabled drivers build config 00:13:07.476 baseband/la12xx: not in enabled drivers build config 00:13:07.476 baseband/null: not in enabled drivers build config 00:13:07.476 baseband/turbo_sw: not in enabled drivers build config 00:13:07.476 gpu/cuda: not in enabled drivers build config 00:13:07.476 00:13:07.476 00:13:07.476 Build targets in project: 316 00:13:07.476 00:13:07.477 DPDK 22.11.4 00:13:07.477 00:13:07.477 User defined options 00:13:07.477 libdir : lib 00:13:07.477 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:07.477 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:13:07.477 c_link_args : 00:13:07.477 enable_docs : false 00:13:07.477 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:13:07.477 enable_kmods : false 00:13:07.477 machine : native 00:13:07.477 tests : false 00:13:07.477 00:13:07.477 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:13:07.477 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
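Configuration succeeds with 316 build targets; the two warnings printed are cosmetic (Meson now prefers the explicit `meson setup` spelling over bare `meson`, and DPDK's `cpu_instruction_set` option over the deprecated `machine` option noted at config/meson.build:83). A condensed, warning-free sketch of the configure-and-build step, with the option list abridged from the full command recorded earlier in the log:

    # Condensed form of the DPDK configure + build performed by this job.
    # Options are abridged from the meson command line logged above.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson setup build-tmp --prefix="$PWD/build" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp -j48   # parallel build, as in the step that follows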
00:13:07.745 16:26:27 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:13:07.745 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:13:07.745 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:13:07.745 [2/745] Generating lib/rte_telemetry_def with a custom command 00:13:07.745 [3/745] Generating lib/rte_kvargs_def with a custom command 00:13:07.745 [4/745] Generating lib/rte_telemetry_mingw with a custom command 00:13:07.745 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:13:07.745 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:13:07.745 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:13:07.745 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:13:07.745 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:13:07.745 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:13:07.745 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:13:07.745 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:13:08.006 [13/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:13:08.006 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:13:08.006 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:13:08.006 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:13:08.006 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:13:08.006 [18/745] Linking static target lib/librte_kvargs.a 00:13:08.006 [19/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:13:08.006 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:13:08.006 [21/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:13:08.006 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:13:08.006 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:13:08.006 [24/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:13:08.006 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:13:08.006 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:13:08.006 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:13:08.006 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:13:08.006 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:13:08.006 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:13:08.006 [31/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:13:08.006 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:13:08.006 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:13:08.006 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:13:08.006 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:13:08.006 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:13:08.006 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:13:08.006 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:13:08.006 [39/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:13:08.006 [40/745] Generating lib/rte_eal_mingw with a custom command 00:13:08.006 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:13:08.006 [42/745] Generating lib/rte_eal_def with a custom command 00:13:08.006 [43/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:13:08.006 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:13:08.006 [45/745] Generating lib/rte_ring_def with a custom command 00:13:08.006 [46/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:13:08.006 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:13:08.006 [48/745] Generating lib/rte_ring_mingw with a custom command 00:13:08.006 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:13:08.006 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:13:08.006 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:13:08.006 [52/745] Generating lib/rte_rcu_def with a custom command 00:13:08.006 [53/745] Generating lib/rte_rcu_mingw with a custom command 00:13:08.006 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:13:08.006 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:13:08.006 [56/745] Generating lib/rte_mempool_def with a custom command 00:13:08.006 [57/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:13:08.006 [58/745] Generating lib/rte_mempool_mingw with a custom command 00:13:08.006 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:13:08.270 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:13:08.270 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:13:08.270 [62/745] Generating lib/rte_mbuf_def with a custom command 00:13:08.270 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:13:08.270 [64/745] Generating lib/rte_mbuf_mingw with a custom command 00:13:08.270 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:13:08.270 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:13:08.270 [67/745] Generating lib/rte_net_mingw with a custom command 00:13:08.270 [68/745] Generating lib/rte_net_def with a custom command 00:13:08.270 [69/745] Generating lib/rte_meter_mingw with a custom command 00:13:08.270 [70/745] Generating lib/rte_meter_def with a custom command 00:13:08.270 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:13:08.270 [72/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:13:08.270 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:13:08.270 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:13:08.270 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:13:08.270 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:13:08.270 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:13:08.270 [78/745] Generating lib/rte_ethdev_def with a custom command 00:13:08.270 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:13:08.270 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:13:08.270 [81/745] Linking static target lib/librte_ring.a 00:13:08.270 [82/745] Linking target lib/librte_kvargs.so.23.0 00:13:08.530 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:13:08.530 [84/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:13:08.530 [85/745] Generating lib/rte_ethdev_mingw with a custom command 00:13:08.530 [86/745] Linking static target lib/librte_meter.a 00:13:08.530 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:13:08.530 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:13:08.530 [89/745] Generating lib/rte_pci_def with a custom command 00:13:08.530 [90/745] Generating lib/rte_pci_mingw with a custom command 00:13:08.530 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:13:08.530 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:08.530 [93/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:13:08.530 [94/745] Linking static target lib/librte_pci.a 00:13:08.530 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:13:08.530 [96/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:13:08.790 [97/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:08.790 [98/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:13:08.790 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:13:08.790 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:08.790 [101/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:13:08.790 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:08.790 [103/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:13:08.790 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:08.790 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:08.791 [106/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:08.791 [107/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:13:08.791 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:08.791 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:08.791 [110/745] Generating lib/rte_cmdline_def with a custom command 00:13:08.791 [111/745] Linking static target lib/librte_telemetry.a 00:13:08.791 [112/745] Generating lib/rte_cmdline_mingw with a custom command 00:13:09.053 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:09.053 [114/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:13:09.053 [115/745] Generating lib/rte_metrics_def with a custom command 00:13:09.053 [116/745] Generating lib/rte_metrics_mingw with a custom command 00:13:09.053 [117/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:09.053 [118/745] Generating lib/rte_hash_mingw with a custom command 00:13:09.053 [119/745] Generating lib/rte_hash_def with a custom command 00:13:09.053 [120/745] Generating lib/rte_timer_def with a custom command 00:13:09.053 [121/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:13:09.053 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:13:09.053 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:09.053 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:13:09.315 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:09.315 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:13:09.315 [127/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:13:09.315 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:13:09.315 [129/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:13:09.315 [130/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:13:09.315 [131/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:13:09.315 [132/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:13:09.315 [133/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:13:09.315 [134/745] Generating lib/rte_acl_def with a custom command 00:13:09.315 [135/745] Generating lib/rte_acl_mingw with a custom command 00:13:09.315 [136/745] Generating lib/rte_bbdev_def with a custom command 00:13:09.315 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:13:09.315 [138/745] Generating lib/rte_bitratestats_def with a custom command 00:13:09.315 [139/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:13:09.315 [140/745] Generating lib/rte_bitratestats_mingw with a custom command 00:13:09.315 [141/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:13:09.315 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:13:09.581 [143/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:13:09.581 [144/745] Linking target lib/librte_telemetry.so.23.0 00:13:09.581 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:13:09.581 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:09.581 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:09.581 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:13:09.581 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:13:09.581 [150/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:13:09.581 [151/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:13:09.581 [152/745] Generating lib/rte_bpf_def with a custom command 00:13:09.581 [153/745] Generating lib/rte_bpf_mingw with a custom command 00:13:09.581 [154/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:13:09.581 [155/745] Generating lib/rte_cfgfile_def with a custom command 00:13:09.581 [156/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:09.581 [157/745] Generating lib/rte_cfgfile_mingw with a custom command 00:13:09.581 [158/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:13:09.845 [159/745] Generating lib/rte_compressdev_def with a custom command 00:13:09.845 [160/745] Generating lib/rte_compressdev_mingw with a custom command 00:13:09.845 [161/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:09.845 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:09.845 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 
00:13:09.845 [164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:09.845 [165/745] Generating lib/rte_cryptodev_def with a custom command 00:13:09.845 [166/745] Generating lib/rte_cryptodev_mingw with a custom command 00:13:09.845 [167/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:13:09.845 [168/745] Linking static target lib/librte_rcu.a 00:13:09.845 [169/745] Generating lib/rte_distributor_mingw with a custom command 00:13:09.845 [170/745] Generating lib/rte_distributor_def with a custom command 00:13:09.845 [171/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:09.845 [172/745] Linking static target lib/librte_cmdline.a 00:13:09.845 [173/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:09.845 [174/745] Linking static target lib/librte_timer.a 00:13:09.845 [175/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:09.845 [176/745] Generating lib/rte_efd_def with a custom command 00:13:09.845 [177/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:13:09.845 [178/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:09.845 [179/745] Linking static target lib/librte_net.a 00:13:09.845 [180/745] Generating lib/rte_efd_mingw with a custom command 00:13:10.105 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:10.105 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:13:10.105 [183/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:13:10.105 [184/745] Linking static target lib/librte_metrics.a 00:13:10.105 [185/745] Linking static target lib/librte_cfgfile.a 00:13:10.105 [186/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:13:10.105 [187/745] Linking static target lib/librte_mempool.a 00:13:10.373 [188/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:13:10.373 [189/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:13:10.373 [190/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:13:10.373 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:10.373 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:13:10.373 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:13:10.632 [194/745] Generating lib/rte_eventdev_def with a custom command 00:13:10.632 [195/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:13:10.632 [196/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:13:10.632 [197/745] Linking static target lib/librte_eal.a 00:13:10.632 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:13:10.632 [199/745] Generating lib/rte_eventdev_mingw with a custom command 00:13:10.632 [200/745] Generating lib/rte_gpudev_def with a custom command 00:13:10.632 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:13:10.632 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:13:10.632 [203/745] Generating lib/rte_gpudev_mingw with a custom command 00:13:10.632 [204/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:13:10.632 [205/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:13:10.632 [206/745] Linking static target lib/librte_bitratestats.a 00:13:10.632 [207/745] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:13:10.632 [208/745] Generating lib/rte_gro_def with a custom command 00:13:10.632 [209/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:13:10.632 [210/745] Generating lib/rte_gro_mingw with a custom command 00:13:10.893 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:13:10.893 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:13:10.893 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:10.893 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:10.893 [215/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:13:10.893 [216/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:13:10.893 [217/745] Generating lib/rte_gso_mingw with a custom command 00:13:11.155 [218/745] Generating lib/rte_gso_def with a custom command 00:13:11.155 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:13:11.155 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:13:11.155 [221/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:11.155 [222/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:13:11.155 [223/745] Generating lib/rte_ip_frag_def with a custom command 00:13:11.155 [224/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:13:11.155 [225/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:13:11.155 [226/745] Linking static target lib/librte_bbdev.a 00:13:11.416 [227/745] Generating lib/rte_ip_frag_mingw with a custom command 00:13:11.416 [228/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:11.416 [229/745] Generating lib/rte_jobstats_def with a custom command 00:13:11.416 [230/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:11.416 [231/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:13:11.416 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:13:11.416 [233/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:13:11.416 [234/745] Generating lib/rte_latencystats_def with a custom command 00:13:11.416 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:13:11.416 [236/745] Generating lib/rte_lpm_def with a custom command 00:13:11.416 [237/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:13:11.416 [238/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:11.416 [239/745] Linking static target lib/librte_compressdev.a 00:13:11.416 [240/745] Generating lib/rte_lpm_mingw with a custom command 00:13:11.416 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:13:11.416 [242/745] Linking static target lib/librte_jobstats.a 00:13:11.679 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:13:11.679 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:11.942 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:13:11.943 [246/745] Linking static target lib/librte_distributor.a 00:13:11.943 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:13:11.943 [248/745] 
Generating lib/rte_member_def with a custom command 00:13:11.943 [249/745] Generating lib/rte_member_mingw with a custom command 00:13:11.943 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:13:11.943 [251/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:13:11.943 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:13:11.943 [253/745] Generating lib/rte_pcapng_def with a custom command 00:13:11.943 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:13:12.210 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:13:12.210 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:13:12.210 [257/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:13:12.210 [258/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:12.210 [259/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:12.210 [260/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:12.210 [261/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:13:12.210 [262/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:13:12.210 [263/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:13:12.210 [264/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:13:12.210 [265/745] Linking static target lib/librte_bpf.a 00:13:12.210 [266/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:13:12.210 [267/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:12.210 [268/745] Generating lib/rte_power_mingw with a custom command 00:13:12.210 [269/745] Generating lib/rte_power_def with a custom command 00:13:12.210 [270/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:13:12.210 [271/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:13:12.210 [272/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:13:12.471 [273/745] Linking static target lib/librte_gro.a 00:13:12.471 [274/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:13:12.471 [275/745] Generating lib/rte_rawdev_mingw with a custom command 00:13:12.471 [276/745] Generating lib/rte_rawdev_def with a custom command 00:13:12.471 [277/745] Linking static target lib/librte_gpudev.a 00:13:12.471 [278/745] Generating lib/rte_regexdev_def with a custom command 00:13:12.471 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:13:12.471 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:13:12.471 [281/745] Generating lib/rte_dmadev_def with a custom command 00:13:12.471 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:13:12.471 [283/745] Generating lib/rte_rib_def with a custom command 00:13:12.471 [284/745] Generating lib/rte_rib_mingw with a custom command 00:13:12.471 [285/745] Generating lib/rte_reorder_def with a custom command 00:13:12.731 [286/745] Generating lib/rte_reorder_mingw with a custom command 00:13:12.731 [287/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:13:12.731 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:13:12.731 [289/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:13:12.731 [290/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:12.731 
[291/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:13:12.731 [292/745] Generating lib/rte_sched_def with a custom command 00:13:12.731 [293/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:13:12.731 [294/745] Generating lib/rte_sched_mingw with a custom command 00:13:12.731 [295/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:13:12.731 [296/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:13:12.731 [297/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:13:12.731 [298/745] Generating lib/rte_security_def with a custom command 00:13:12.731 [299/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:13:12.731 [300/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:13:12.998 [301/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:13:12.998 [302/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:13:12.998 [303/745] Generating lib/rte_security_mingw with a custom command 00:13:12.998 [304/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:13:12.998 [305/745] Linking static target lib/librte_latencystats.a 00:13:12.998 [306/745] Generating lib/rte_stack_def with a custom command 00:13:12.998 [307/745] Generating lib/rte_stack_mingw with a custom command 00:13:12.998 [308/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:12.998 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:13:12.998 [310/745] Linking static target lib/librte_rawdev.a 00:13:12.998 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:13:12.998 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:13:12.998 [313/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:13:12.998 [314/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:13:12.998 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:13:12.998 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:13:12.998 [317/745] Linking static target lib/librte_stack.a 00:13:12.998 [318/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:13:12.998 [319/745] Generating lib/rte_vhost_def with a custom command 00:13:12.998 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:13:12.998 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:13:13.261 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:13.261 [323/745] Linking static target lib/librte_dmadev.a 00:13:13.261 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:13:13.261 [325/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:13:13.261 [326/745] Linking static target lib/librte_ip_frag.a 00:13:13.261 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:13:13.261 [328/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:13:13.527 [329/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:13:13.527 [330/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:13:13.527 [331/745] Generating lib/rte_ipsec_def with a custom command 00:13:13.527 [332/745] Generating lib/rte_ipsec_mingw with a 
custom command 00:13:13.527 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:13:13.788 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:13.788 [335/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:13.788 [336/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:13:13.788 [337/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:13:13.788 [338/745] Generating lib/rte_fib_def with a custom command 00:13:13.788 [339/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:13.788 [340/745] Generating lib/rte_fib_mingw with a custom command 00:13:13.788 [341/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:13:13.788 [342/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:13:13.788 [343/745] Linking static target lib/librte_gso.a 00:13:13.788 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:13:13.788 [345/745] Linking static target lib/librte_regexdev.a 00:13:14.055 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:14.055 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:13:14.055 [348/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:13:14.055 [349/745] Linking static target lib/librte_efd.a 00:13:14.055 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:13:14.055 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:13:14.055 [352/745] Linking static target lib/librte_pcapng.a 00:13:14.318 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:13:14.318 [354/745] Linking static target lib/librte_lpm.a 00:13:14.318 [355/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:13:14.318 [356/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:13:14.318 [357/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:13:14.318 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:13:14.318 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:14.318 [360/745] Linking static target lib/librte_reorder.a 00:13:14.583 [361/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:13:14.583 [362/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:14.583 [363/745] Generating lib/rte_port_def with a custom command 00:13:14.583 [364/745] Generating lib/rte_port_mingw with a custom command 00:13:14.583 [365/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:13:14.583 [366/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:13:14.583 [367/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:13:14.583 [368/745] Generating lib/rte_pdump_def with a custom command 00:13:14.583 [369/745] Generating lib/rte_pdump_mingw with a custom command 00:13:14.583 [370/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:13:14.583 [371/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:13:14.583 [372/745] Linking static target lib/acl/libavx2_tmp.a 00:13:14.583 [373/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:13:14.849 [374/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:13:14.849 
[375/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:13:14.849 [376/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:13:14.849 [377/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:13:14.849 [378/745] Linking static target lib/librte_security.a 00:13:14.849 [379/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:13:14.849 [380/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:13:14.849 [381/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:14.849 [382/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:13:14.849 [383/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:13:14.849 [384/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:13:14.849 [385/745] Linking static target lib/librte_power.a 00:13:15.113 [386/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:13:15.113 [387/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:15.113 [388/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:13:15.113 [389/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:15.113 [390/745] Linking static target lib/librte_rib.a 00:13:15.113 [391/745] Linking static target lib/librte_hash.a 00:13:15.113 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:13:15.375 [393/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:13:15.375 [394/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:13:15.375 [395/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:13:15.375 [396/745] Linking static target lib/acl/libavx512_tmp.a 00:13:15.375 [397/745] Generating lib/rte_table_def with a custom command 00:13:15.375 [398/745] Linking static target lib/librte_acl.a 00:13:15.375 [399/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:15.375 [400/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:15.375 [401/745] Generating lib/rte_table_mingw with a custom command 00:13:15.375 [402/745] Linking static target lib/librte_ethdev.a 00:13:15.637 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:13:15.905 [404/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:13:15.905 [405/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:13:15.905 [406/745] Linking static target lib/librte_mbuf.a 00:13:15.905 [407/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:13:15.905 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:13:15.905 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:13:15.905 [410/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:13:15.905 [411/745] Generating lib/rte_pipeline_def with a custom command 00:13:15.905 [412/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:13:16.175 [413/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:13:16.175 [414/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:13:16.175 [415/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:13:16.175 [416/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 
00:13:16.175 [417/745] Generating lib/rte_pipeline_mingw with a custom command 00:13:16.175 [418/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:13:16.175 [419/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:13:16.175 [420/745] Generating lib/rte_graph_def with a custom command 00:13:16.175 [421/745] Generating lib/rte_graph_mingw with a custom command 00:13:16.175 [422/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:13:16.175 [423/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:13:16.175 [424/745] Linking static target lib/librte_fib.a 00:13:16.435 [425/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:13:16.435 [426/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:13:16.435 [427/745] Linking static target lib/librte_eventdev.a 00:13:16.435 [428/745] Linking static target lib/librte_member.a 00:13:16.435 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:13:16.435 [430/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:13:16.435 [431/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:13:16.435 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:13:16.435 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:13:16.435 [434/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:13:16.435 [435/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:13:16.435 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:13:16.435 [437/745] Generating lib/rte_node_def with a custom command 00:13:16.697 [438/745] Generating lib/rte_node_mingw with a custom command 00:13:16.697 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:13:16.697 [440/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:16.697 [441/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:13:16.697 [442/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:16.697 [443/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:13:16.697 [444/745] Generating drivers/rte_bus_pci_def with a custom command 00:13:16.697 [445/745] Linking static target lib/librte_sched.a 00:13:16.697 [446/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:13:16.966 [447/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:13:16.966 [448/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:13:16.966 [449/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:13:16.966 [450/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:13:16.966 [451/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:16.966 [452/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:13:16.966 [453/745] Generating drivers/rte_bus_vdev_def with a custom command 00:13:16.966 [454/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:16.966 [455/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:13:16.966 [456/745] Generating drivers/rte_mempool_ring_def with a custom command 00:13:16.966 [457/745] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:13:16.966 [458/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:16.966 [459/745] Linking static target lib/librte_cryptodev.a 00:13:16.966 [460/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:13:17.232 [461/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:13:17.232 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:13:17.232 [463/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:13:17.232 [464/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:13:17.232 [465/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:17.232 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:13:17.232 [467/745] Linking static target lib/librte_pdump.a 00:13:17.232 [468/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:13:17.232 [469/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:13:17.232 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:17.232 [471/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:13:17.494 [472/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:17.494 [473/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:13:17.494 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:13:17.494 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:13:17.494 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:13:17.494 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:13:17.494 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:13:17.494 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:13:17.494 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:13:17.757 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:13:17.757 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:13:17.757 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:13:17.757 [484/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:17.757 [485/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:13:17.757 [486/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:17.757 [487/745] Linking static target drivers/librte_bus_vdev.a 00:13:17.758 [488/745] Linking static target lib/librte_table.a 00:13:17.758 [489/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:13:17.758 [490/745] Linking static target lib/librte_ipsec.a 00:13:18.025 [491/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:13:18.025 [492/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:18.025 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:13:18.025 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:18.290 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:18.290 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:13:18.290 [497/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 
00:13:18.290 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:13:18.290 [499/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:13:18.290 [500/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:13:18.290 [501/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:13:18.553 [502/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:13:18.553 [503/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:13:18.553 [504/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:18.553 [505/745] Linking static target lib/librte_graph.a 00:13:18.553 [506/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:13:18.553 [507/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:18.553 [508/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:13:18.553 [509/745] Linking static target drivers/librte_bus_pci.a 00:13:18.553 [510/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:18.553 [511/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:13:18.553 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:13:18.813 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:13:18.813 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:13:19.079 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:13:19.079 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:19.343 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:13:19.343 [518/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:13:19.343 [519/745] Linking static target lib/librte_port.a 00:13:19.343 [520/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:19.343 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:13:19.614 [522/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:19.614 [523/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:19.614 [524/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:13:19.614 [525/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:13:19.614 [526/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:13:19.874 [527/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:19.874 [528/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:13:19.874 [529/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:19.874 [530/745] Linking static target drivers/librte_mempool_ring.a 00:13:19.874 [531/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:13:19.874 [532/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:19.874 [533/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:13:20.140 [534/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:13:20.140 [535/745] Compiling C 
object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:13:20.140 [536/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:13:20.140 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:13:20.140 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:13:20.140 [539/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:13:20.406 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:13:20.406 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:20.669 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:13:20.669 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:13:20.669 [544/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:13:20.669 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:13:20.934 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:13:20.934 [547/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:13:20.934 [548/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:13:20.934 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:13:20.934 [550/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:13:21.197 [551/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:13:21.463 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:13:21.463 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:13:21.744 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:13:21.744 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:13:21.744 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:13:21.744 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:13:21.744 [558/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:13:22.010 [559/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:13:22.010 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:13:22.271 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:13:22.271 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:13:22.271 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:13:22.271 [564/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:13:22.271 [565/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:13:22.271 [566/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:13:22.271 [567/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:13:22.271 [568/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:13:22.271 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:13:22.539 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:13:22.539 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 
00:13:22.800 [572/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:13:22.800 [573/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:13:22.800 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:13:22.800 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:13:23.063 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:13:23.063 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:13:23.063 [578/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:13:23.063 [579/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:13:23.063 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:13:23.063 [581/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:13:23.063 [582/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:13:23.329 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:13:23.329 [584/745] Linking target lib/librte_eal.so.23.0 00:13:23.329 [585/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:13:23.329 [586/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:23.329 [587/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:13:23.591 [588/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:13:23.591 [589/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:13:23.591 [590/745] Linking target lib/librte_ring.so.23.0 00:13:23.591 [591/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:13:23.851 [592/745] Linking target lib/librte_meter.so.23.0 00:13:23.851 [593/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:13:23.851 [594/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:13:23.851 [595/745] Linking target lib/librte_rcu.so.23.0 00:13:24.112 [596/745] Linking target lib/librte_mempool.so.23.0 00:13:24.112 [597/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:13:24.112 [598/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:13:24.112 [599/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:13:24.112 [600/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:13:24.112 [601/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:13:24.112 [602/745] Linking target lib/librte_timer.so.23.0 00:13:24.112 [603/745] Linking target lib/librte_pci.so.23.0 00:13:24.112 [604/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:13:24.112 [605/745] Linking target lib/librte_cfgfile.so.23.0 00:13:24.112 [606/745] Linking target lib/librte_jobstats.so.23.0 00:13:24.112 [607/745] Linking target lib/librte_acl.so.23.0 00:13:24.112 [608/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:13:24.112 [609/745] Linking target lib/librte_rawdev.so.23.0 00:13:24.112 [610/745] Linking target lib/librte_stack.so.23.0 00:13:24.112 [611/745] Linking target lib/librte_dmadev.so.23.0 00:13:24.112 [612/745] Generating symbol 
file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:13:24.112 [613/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:13:24.112 [614/745] Linking target lib/librte_graph.so.23.0 00:13:24.371 [615/745] Linking target drivers/librte_bus_vdev.so.23.0 00:13:24.371 [616/745] Linking target lib/librte_rib.so.23.0 00:13:24.371 [617/745] Linking target drivers/librte_mempool_ring.so.23.0 00:13:24.371 [618/745] Linking target lib/librte_mbuf.so.23.0 00:13:24.371 [619/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:13:24.371 [620/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:13:24.371 [621/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:13:24.371 [622/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:13:24.371 [623/745] Linking target drivers/librte_bus_pci.so.23.0 00:13:24.371 [624/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:13:24.371 [625/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:13:24.371 [626/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:13:24.371 [627/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:13:24.371 [628/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:13:24.371 [629/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:13:24.630 [630/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:13:24.630 [631/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:13:24.630 [632/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:13:24.630 [633/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:13:24.630 [634/745] Linking target lib/librte_reorder.so.23.0 00:13:24.630 [635/745] Linking target lib/librte_gpudev.so.23.0 00:13:24.630 [636/745] Linking target lib/librte_net.so.23.0 00:13:24.630 [637/745] Linking target lib/librte_distributor.so.23.0 00:13:24.630 [638/745] Linking target lib/librte_compressdev.so.23.0 00:13:24.630 [639/745] Linking target lib/librte_bbdev.so.23.0 00:13:24.630 [640/745] Linking target lib/librte_regexdev.so.23.0 00:13:24.630 [641/745] Linking target lib/librte_sched.so.23.0 00:13:24.630 [642/745] Linking target lib/librte_cryptodev.so.23.0 00:13:24.630 [643/745] Linking target lib/librte_fib.so.23.0 00:13:24.630 [644/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:13:24.630 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:13:24.630 [646/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:13:24.630 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:13:24.630 [648/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:13:24.630 [649/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:13:24.889 [650/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:13:24.889 [651/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:13:24.889 [652/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:13:24.889 [653/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:13:24.889 [654/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:13:24.889 [655/745] Linking target lib/librte_security.so.23.0 00:13:24.889 [656/745] Linking target lib/librte_hash.so.23.0 00:13:24.889 [657/745] Linking target lib/librte_cmdline.so.23.0 00:13:24.889 [658/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:13:24.889 [659/745] Linking target lib/librte_ethdev.so.23.0 00:13:24.889 [660/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:13:24.889 [661/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:13:24.889 [662/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:13:24.889 [663/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:13:25.148 [664/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:13:25.148 [665/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:13:25.148 [666/745] Linking target lib/librte_efd.so.23.0 00:13:25.148 [667/745] Linking target lib/librte_ipsec.so.23.0 00:13:25.148 [668/745] Linking target lib/librte_lpm.so.23.0 00:13:25.148 [669/745] Linking target lib/librte_member.so.23.0 00:13:25.148 [670/745] Linking target lib/librte_pcapng.so.23.0 00:13:25.148 [671/745] Linking target lib/librte_gro.so.23.0 00:13:25.148 [672/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:13:25.148 [673/745] Linking target lib/librte_ip_frag.so.23.0 00:13:25.148 [674/745] Linking target lib/librte_metrics.so.23.0 00:13:25.148 [675/745] Linking target lib/librte_bpf.so.23.0 00:13:25.148 [676/745] Linking target lib/librte_gso.so.23.0 00:13:25.148 [677/745] Linking target lib/librte_power.so.23.0 00:13:25.148 [678/745] Linking target lib/librte_eventdev.so.23.0 00:13:25.148 [679/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:13:25.148 [680/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:13:25.148 [681/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:13:25.148 [682/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:13:25.148 [683/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:13:25.148 [684/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:13:25.406 [685/745] Linking target lib/librte_bitratestats.so.23.0 00:13:25.406 [686/745] Linking target lib/librte_latencystats.so.23.0 00:13:25.406 [687/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:13:25.406 [688/745] Linking target lib/librte_port.so.23.0 00:13:25.406 [689/745] Linking target lib/librte_pdump.so.23.0 00:13:25.406 [690/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:13:25.406 [691/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:13:25.406 [692/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:13:25.406 [693/745] Linking target lib/librte_table.so.23.0 00:13:25.665 [694/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:13:25.665 [695/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:13:25.665 [696/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:13:25.665 [697/745] Compiling C object 
app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:13:25.665 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:13:26.601 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:13:26.601 [700/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:13:26.601 [701/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:13:26.601 [702/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:13:26.601 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:13:26.601 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:13:26.859 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:13:26.859 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:13:26.859 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:13:26.859 [708/745] Linking static target drivers/librte_net_i40e.a 00:13:27.117 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:13:27.117 [710/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:13:27.375 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:13:27.375 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:13:28.310 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:13:28.310 [714/745] Linking static target lib/librte_node.a 00:13:28.310 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:13:28.310 [716/745] Linking target lib/librte_node.so.23.0 00:13:28.569 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:13:29.504 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:13:29.764 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:13:37.870 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:14:09.944 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:14:09.944 [722/745] Linking static target lib/librte_vhost.a 00:14:11.318 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:14:11.318 [724/745] Linking target lib/librte_vhost.so.23.0 00:14:21.281 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:14:21.281 [726/745] Linking static target lib/librte_pipeline.a 00:14:21.281 [727/745] Linking target app/dpdk-dumpcap 00:14:21.281 [728/745] Linking target app/dpdk-test-fib 00:14:21.281 [729/745] Linking target app/dpdk-test-flow-perf 00:14:21.281 [730/745] Linking target app/dpdk-test-pipeline 00:14:21.281 [731/745] Linking target app/dpdk-test-sad 00:14:21.281 [732/745] Linking target app/dpdk-test-security-perf 00:14:21.281 [733/745] Linking target app/dpdk-pdump 00:14:21.281 [734/745] Linking target app/dpdk-test-acl 00:14:21.281 [735/745] Linking target app/dpdk-test-bbdev 00:14:21.281 [736/745] Linking target app/dpdk-test-gpudev 00:14:21.281 [737/745] Linking target app/dpdk-proc-info 00:14:21.281 [738/745] Linking target app/dpdk-test-regex 00:14:21.281 [739/745] Linking target app/dpdk-test-eventdev 00:14:21.281 [740/745] Linking target app/dpdk-test-crypto-perf 00:14:21.281 [741/745] Linking target app/dpdk-test-compress-perf 00:14:21.281 [742/745] Linking target 
app/dpdk-test-cmdline 00:14:21.281 [743/745] Linking target app/dpdk-testpmd 00:14:22.655 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:14:22.655 [745/745] Linking target lib/librte_pipeline.so.23.0 00:14:22.655 16:27:42 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:14:22.655 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:14:22.655 [0/1] Installing files. 00:14:22.919 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:22.919 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:14:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:14:22.920 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:14:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:14:22.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:14:22.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:22.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:14:22.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:14:22.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:14:22.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:14:22.925 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:22.925 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:14:23.496 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:14:23.496 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:14:23.496 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:14:23.496 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:14:23.496 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.496 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.496 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.496 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.496 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.496 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.496 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.496 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.496 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.496 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.496 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.497 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.497 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.497 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.497 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.497 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.497 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:14:23.500 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:14:23.500 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:14:23.500 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:14:23.500 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:14:23.500 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:14:23.500 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:14:23.500 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:14:23.501 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:14:23.501 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:14:23.501 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:14:23.501 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:14:23.501 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:14:23.501 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:14:23.501 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:14:23.501 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:14:23.501 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:14:23.501 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:14:23.501 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:14:23.501 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:14:23.501 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:14:23.501 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:14:23.501 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:14:23.501 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:14:23.501 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:14:23.501 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:14:23.501 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:14:23.501 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:14:23.501 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:14:23.501 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:14:23.501 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:14:23.501 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:14:23.501 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:14:23.501 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:14:23.501 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:14:23.501 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:14:23.501 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:14:23.501 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:14:23.501 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:14:23.501 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:14:23.501 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:14:23.501 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:14:23.501 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:14:23.501 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:14:23.501 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:14:23.501 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:14:23.501 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:14:23.501 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:14:23.501 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:14:23.501 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:14:23.501 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:14:23.501 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:14:23.501 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:14:23.501 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:14:23.501 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:14:23.501 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:14:23.501 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:14:23.501 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:14:23.501 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:14:23.501 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:14:23.501 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:14:23.501 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:14:23.501 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:14:23.501 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:14:23.501 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:14:23.501 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:14:23.501 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:14:23.501 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:14:23.501 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:14:23.501 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:14:23.501 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:14:23.501 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:14:23.501 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:14:23.501 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:14:23.501 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:14:23.501 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:14:23.501 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:14:23.501 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:14:23.501 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:14:23.501 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:14:23.501 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:14:23.501 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:14:23.502 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:14:23.502 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:14:23.502 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:14:23.502 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:14:23.502 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:14:23.502 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:14:23.502 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:14:23.502 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:14:23.502 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:14:23.502 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:14:23.502 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:14:23.502 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:14:23.502 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:14:23.502 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:14:23.502 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:14:23.502 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:14:23.502 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:14:23.502 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:14:23.502 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:14:23.502 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:14:23.502 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:14:23.502 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:14:23.502 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:14:23.502 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:14:23.502 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:14:23.502 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:14:23.502 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:14:23.502 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:14:23.502 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:14:23.502 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:14:23.502 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:14:23.502 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:14:23.502 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:14:23.502 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:14:23.502 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:14:23.502 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:14:23.502 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:14:23.502 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:14:23.502 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:14:23.502 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:14:23.502 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:14:23.502 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:14:23.502 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:14:23.502 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:14:23.502 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:14:23.502 16:27:43 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:14:23.502 16:27:43 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:14:23.502 16:27:43 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:14:23.502 16:27:43 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:23.502 00:14:23.502 real 1m21.256s 00:14:23.502 user 14m23.987s 00:14:23.502 sys 1m48.566s 00:14:23.502 16:27:43 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:14:23.502 16:27:43 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:14:23.502 ************************************ 00:14:23.502 END TEST build_native_dpdk 00:14:23.502 ************************************ 00:14:23.502 16:27:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:14:23.502 16:27:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:14:23.502 16:27:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:14:23.502 16:27:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:14:23.502 16:27:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:14:23.502 16:27:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:14:23.502 16:27:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:14:23.502 16:27:43 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:14:23.761 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:14:23.761 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:14:23.761 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:23.761 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:24.020 Using 'verbs' RDMA provider 00:14:34.563 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:14:42.671 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:14:42.929 Creating mk/config.mk...done. 00:14:42.929 Creating mk/cc.flags.mk...done. 00:14:42.929 Type 'make' to build. 00:14:42.929 16:28:02 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:14:42.929 16:28:02 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:14:42.929 16:28:02 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:14:42.929 16:28:02 -- common/autotest_common.sh@10 -- $ set +x 00:14:42.929 ************************************ 00:14:42.929 START TEST make 00:14:42.929 ************************************ 00:14:42.929 16:28:02 make -- common/autotest_common.sh@1121 -- $ make -j48 00:14:43.187 make[1]: Nothing to be done for 'all'. 00:14:45.103 The Meson build system 00:14:45.103 Version: 1.3.1 00:14:45.103 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:14:45.103 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:14:45.103 Build type: native build 00:14:45.103 Project name: libvfio-user 00:14:45.103 Project version: 0.0.1 00:14:45.103 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:14:45.103 C linker for the host machine: gcc ld.bfd 2.39-16 00:14:45.103 Host machine cpu family: x86_64 00:14:45.103 Host machine cpu: x86_64 00:14:45.103 Run-time dependency threads found: YES 00:14:45.103 Library dl found: YES 00:14:45.103 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:14:45.103 Run-time dependency json-c found: YES 0.17 00:14:45.103 Run-time dependency cmocka found: YES 1.1.7 00:14:45.103 Program pytest-3 found: NO 00:14:45.103 Program flake8 found: NO 00:14:45.103 Program misspell-fixer found: NO 00:14:45.103 Program restructuredtext-lint found: NO 00:14:45.103 Program valgrind found: YES (/usr/bin/valgrind) 00:14:45.103 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:14:45.103 Compiler for C supports arguments -Wmissing-declarations: YES 00:14:45.103 Compiler for C supports arguments -Wwrite-strings: YES 00:14:45.103 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:14:45.103 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:14:45.103 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:14:45.103 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:14:45.103 Build targets in project: 8 00:14:45.103 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:14:45.103 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:14:45.103 00:14:45.103 libvfio-user 0.0.1 00:14:45.103 00:14:45.103 User defined options 00:14:45.103 buildtype : debug 00:14:45.103 default_library: shared 00:14:45.103 libdir : /usr/local/lib 00:14:45.103 00:14:45.103 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:14:45.678 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:14:45.678 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:14:45.678 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:14:45.678 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:14:45.678 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:14:45.678 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:14:45.678 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:14:45.678 [7/37] Compiling C object samples/null.p/null.c.o 00:14:45.678 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:14:45.678 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:14:45.678 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:14:45.678 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:14:45.678 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:14:45.937 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:14:45.937 [14/37] Compiling C object samples/client.p/client.c.o 00:14:45.937 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:14:45.937 [16/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:14:45.937 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:14:45.937 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:14:45.937 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:14:45.937 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:14:45.937 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:14:45.937 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:14:45.937 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:14:45.937 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:14:45.937 [25/37] Compiling C object samples/server.p/server.c.o 00:14:45.937 [26/37] Linking target samples/client 00:14:45.937 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:14:46.203 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:14:46.203 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:14:46.203 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:14:46.203 [31/37] Linking target test/unit_tests 00:14:46.464 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:14:46.464 [33/37] Linking target samples/server 00:14:46.464 [34/37] Linking target samples/null 00:14:46.464 [35/37] Linking target samples/lspci 00:14:46.464 [36/37] Linking target samples/gpio-pci-idio-16 00:14:46.464 [37/37] Linking target samples/shadow_ioeventfd_server 00:14:46.464 INFO: autodetecting backend as ninja 00:14:46.464 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:14:46.464 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:14:47.409 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:14:47.409 ninja: no work to do. 00:14:59.603 CC lib/ut/ut.o 00:14:59.603 CC lib/log/log.o 00:14:59.603 CC lib/log/log_flags.o 00:14:59.603 CC lib/log/log_deprecated.o 00:14:59.603 CC lib/ut_mock/mock.o 00:14:59.603 LIB libspdk_log.a 00:14:59.603 LIB libspdk_ut_mock.a 00:14:59.603 LIB libspdk_ut.a 00:14:59.603 SO libspdk_ut_mock.so.6.0 00:14:59.603 SO libspdk_ut.so.2.0 00:14:59.603 SO libspdk_log.so.7.0 00:14:59.603 SYMLINK libspdk_ut_mock.so 00:14:59.603 SYMLINK libspdk_ut.so 00:14:59.603 SYMLINK libspdk_log.so 00:14:59.603 CC lib/dma/dma.o 00:14:59.603 CXX lib/trace_parser/trace.o 00:14:59.603 CC lib/ioat/ioat.o 00:14:59.603 CC lib/util/base64.o 00:14:59.603 CC lib/util/bit_array.o 00:14:59.603 CC lib/util/cpuset.o 00:14:59.603 CC lib/util/crc16.o 00:14:59.603 CC lib/util/crc32.o 00:14:59.603 CC lib/util/crc32c.o 00:14:59.603 CC lib/util/crc32_ieee.o 00:14:59.603 CC lib/util/crc64.o 00:14:59.603 CC lib/util/dif.o 00:14:59.603 CC lib/util/fd.o 00:14:59.603 CC lib/util/file.o 00:14:59.603 CC lib/util/hexlify.o 00:14:59.603 CC lib/util/iov.o 00:14:59.603 CC lib/util/math.o 00:14:59.603 CC lib/util/pipe.o 00:14:59.603 CC lib/util/strerror_tls.o 00:14:59.603 CC lib/util/string.o 00:14:59.603 CC lib/util/uuid.o 00:14:59.603 CC lib/util/fd_group.o 00:14:59.603 CC lib/util/xor.o 00:14:59.603 CC lib/util/zipf.o 00:14:59.603 CC lib/vfio_user/host/vfio_user_pci.o 00:14:59.603 CC lib/vfio_user/host/vfio_user.o 00:14:59.603 LIB libspdk_dma.a 00:14:59.603 SO libspdk_dma.so.4.0 00:14:59.603 SYMLINK libspdk_dma.so 00:14:59.603 LIB libspdk_ioat.a 00:14:59.603 SO libspdk_ioat.so.7.0 00:14:59.603 LIB libspdk_vfio_user.a 00:14:59.603 SYMLINK libspdk_ioat.so 00:14:59.603 SO libspdk_vfio_user.so.5.0 00:14:59.603 SYMLINK libspdk_vfio_user.so 00:14:59.603 LIB libspdk_util.a 00:14:59.603 SO libspdk_util.so.9.0 00:14:59.861 SYMLINK libspdk_util.so 00:14:59.861 LIB libspdk_trace_parser.a 00:14:59.861 SO libspdk_trace_parser.so.5.0 00:14:59.861 CC lib/json/json_parse.o 00:14:59.861 CC lib/idxd/idxd.o 00:14:59.861 CC lib/env_dpdk/env.o 00:14:59.861 CC lib/json/json_util.o 00:14:59.861 CC lib/idxd/idxd_user.o 00:14:59.861 CC lib/env_dpdk/memory.o 00:14:59.861 CC lib/conf/conf.o 00:14:59.861 CC lib/json/json_write.o 00:14:59.861 CC lib/vmd/vmd.o 00:14:59.861 CC lib/idxd/idxd_kernel.o 00:14:59.861 CC lib/rdma/common.o 00:14:59.861 CC lib/env_dpdk/pci.o 00:14:59.861 CC lib/vmd/led.o 00:14:59.861 CC lib/env_dpdk/init.o 00:14:59.861 CC lib/rdma/rdma_verbs.o 00:14:59.861 CC lib/env_dpdk/threads.o 00:14:59.861 CC lib/env_dpdk/pci_ioat.o 00:14:59.861 CC lib/env_dpdk/pci_virtio.o 00:14:59.861 CC lib/env_dpdk/pci_vmd.o 00:14:59.861 CC lib/env_dpdk/pci_idxd.o 00:14:59.861 CC lib/env_dpdk/pci_event.o 00:14:59.861 CC lib/env_dpdk/sigbus_handler.o 00:14:59.861 CC lib/env_dpdk/pci_dpdk.o 00:14:59.861 CC lib/env_dpdk/pci_dpdk_2207.o 00:14:59.861 CC lib/env_dpdk/pci_dpdk_2211.o 00:15:00.119 SYMLINK libspdk_trace_parser.so 00:15:00.119 LIB libspdk_conf.a 00:15:00.119 SO libspdk_conf.so.6.0 00:15:00.377 LIB libspdk_rdma.a 00:15:00.377 SYMLINK libspdk_conf.so 00:15:00.377 SO libspdk_rdma.so.6.0 00:15:00.377 LIB libspdk_json.a 00:15:00.377 SO libspdk_json.so.6.0 00:15:00.377 SYMLINK libspdk_rdma.so 00:15:00.377 SYMLINK 
libspdk_json.so 00:15:00.635 LIB libspdk_idxd.a 00:15:00.635 CC lib/jsonrpc/jsonrpc_server.o 00:15:00.635 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:15:00.635 CC lib/jsonrpc/jsonrpc_client.o 00:15:00.635 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:15:00.635 SO libspdk_idxd.so.12.0 00:15:00.635 SYMLINK libspdk_idxd.so 00:15:00.635 LIB libspdk_vmd.a 00:15:00.635 SO libspdk_vmd.so.6.0 00:15:00.635 SYMLINK libspdk_vmd.so 00:15:00.893 LIB libspdk_jsonrpc.a 00:15:00.893 SO libspdk_jsonrpc.so.6.0 00:15:00.893 SYMLINK libspdk_jsonrpc.so 00:15:01.151 CC lib/rpc/rpc.o 00:15:01.409 LIB libspdk_rpc.a 00:15:01.409 SO libspdk_rpc.so.6.0 00:15:01.409 SYMLINK libspdk_rpc.so 00:15:01.667 CC lib/trace/trace.o 00:15:01.667 CC lib/notify/notify.o 00:15:01.667 CC lib/trace/trace_flags.o 00:15:01.667 CC lib/notify/notify_rpc.o 00:15:01.667 CC lib/trace/trace_rpc.o 00:15:01.667 CC lib/keyring/keyring.o 00:15:01.667 CC lib/keyring/keyring_rpc.o 00:15:01.667 LIB libspdk_notify.a 00:15:01.667 SO libspdk_notify.so.6.0 00:15:01.667 LIB libspdk_keyring.a 00:15:01.667 SYMLINK libspdk_notify.so 00:15:01.925 LIB libspdk_trace.a 00:15:01.925 SO libspdk_keyring.so.1.0 00:15:01.925 SO libspdk_trace.so.10.0 00:15:01.925 SYMLINK libspdk_keyring.so 00:15:01.925 SYMLINK libspdk_trace.so 00:15:01.925 LIB libspdk_env_dpdk.a 00:15:01.925 SO libspdk_env_dpdk.so.14.0 00:15:02.203 CC lib/sock/sock.o 00:15:02.203 CC lib/sock/sock_rpc.o 00:15:02.203 CC lib/thread/thread.o 00:15:02.203 CC lib/thread/iobuf.o 00:15:02.204 SYMLINK libspdk_env_dpdk.so 00:15:02.462 LIB libspdk_sock.a 00:15:02.462 SO libspdk_sock.so.9.0 00:15:02.462 SYMLINK libspdk_sock.so 00:15:02.720 CC lib/nvme/nvme_ctrlr_cmd.o 00:15:02.720 CC lib/nvme/nvme_ctrlr.o 00:15:02.720 CC lib/nvme/nvme_fabric.o 00:15:02.720 CC lib/nvme/nvme_ns_cmd.o 00:15:02.720 CC lib/nvme/nvme_ns.o 00:15:02.720 CC lib/nvme/nvme_pcie_common.o 00:15:02.720 CC lib/nvme/nvme_pcie.o 00:15:02.720 CC lib/nvme/nvme_qpair.o 00:15:02.720 CC lib/nvme/nvme.o 00:15:02.720 CC lib/nvme/nvme_quirks.o 00:15:02.720 CC lib/nvme/nvme_transport.o 00:15:02.720 CC lib/nvme/nvme_discovery.o 00:15:02.720 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:15:02.720 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:15:02.720 CC lib/nvme/nvme_tcp.o 00:15:02.720 CC lib/nvme/nvme_opal.o 00:15:02.720 CC lib/nvme/nvme_io_msg.o 00:15:02.720 CC lib/nvme/nvme_poll_group.o 00:15:02.720 CC lib/nvme/nvme_zns.o 00:15:02.720 CC lib/nvme/nvme_stubs.o 00:15:02.720 CC lib/nvme/nvme_auth.o 00:15:02.720 CC lib/nvme/nvme_cuse.o 00:15:02.720 CC lib/nvme/nvme_vfio_user.o 00:15:02.720 CC lib/nvme/nvme_rdma.o 00:15:03.655 LIB libspdk_thread.a 00:15:03.655 SO libspdk_thread.so.10.0 00:15:03.655 SYMLINK libspdk_thread.so 00:15:03.914 CC lib/accel/accel.o 00:15:03.914 CC lib/blob/blobstore.o 00:15:03.914 CC lib/virtio/virtio.o 00:15:03.914 CC lib/accel/accel_rpc.o 00:15:03.914 CC lib/virtio/virtio_vhost_user.o 00:15:03.914 CC lib/blob/request.o 00:15:03.914 CC lib/accel/accel_sw.o 00:15:03.914 CC lib/blob/zeroes.o 00:15:03.914 CC lib/virtio/virtio_vfio_user.o 00:15:03.914 CC lib/init/json_config.o 00:15:03.914 CC lib/vfu_tgt/tgt_endpoint.o 00:15:03.914 CC lib/virtio/virtio_pci.o 00:15:03.914 CC lib/blob/blob_bs_dev.o 00:15:03.914 CC lib/init/subsystem.o 00:15:03.914 CC lib/vfu_tgt/tgt_rpc.o 00:15:03.914 CC lib/init/subsystem_rpc.o 00:15:03.914 CC lib/init/rpc.o 00:15:04.172 LIB libspdk_init.a 00:15:04.172 SO libspdk_init.so.5.0 00:15:04.172 LIB libspdk_vfu_tgt.a 00:15:04.172 LIB libspdk_virtio.a 00:15:04.172 SYMLINK libspdk_init.so 00:15:04.172 SO libspdk_vfu_tgt.so.3.0 00:15:04.172 
SO libspdk_virtio.so.7.0 00:15:04.431 SYMLINK libspdk_vfu_tgt.so 00:15:04.431 SYMLINK libspdk_virtio.so 00:15:04.431 CC lib/event/app.o 00:15:04.431 CC lib/event/reactor.o 00:15:04.431 CC lib/event/log_rpc.o 00:15:04.431 CC lib/event/app_rpc.o 00:15:04.431 CC lib/event/scheduler_static.o 00:15:04.688 LIB libspdk_event.a 00:15:04.947 SO libspdk_event.so.13.0 00:15:04.947 SYMLINK libspdk_event.so 00:15:04.947 LIB libspdk_accel.a 00:15:04.947 SO libspdk_accel.so.15.0 00:15:04.947 SYMLINK libspdk_accel.so 00:15:04.947 LIB libspdk_nvme.a 00:15:05.205 SO libspdk_nvme.so.13.0 00:15:05.205 CC lib/bdev/bdev.o 00:15:05.205 CC lib/bdev/bdev_rpc.o 00:15:05.205 CC lib/bdev/bdev_zone.o 00:15:05.205 CC lib/bdev/part.o 00:15:05.205 CC lib/bdev/scsi_nvme.o 00:15:05.463 SYMLINK libspdk_nvme.so 00:15:06.836 LIB libspdk_blob.a 00:15:06.836 SO libspdk_blob.so.11.0 00:15:07.094 SYMLINK libspdk_blob.so 00:15:07.094 CC lib/lvol/lvol.o 00:15:07.094 CC lib/blobfs/blobfs.o 00:15:07.094 CC lib/blobfs/tree.o 00:15:07.660 LIB libspdk_bdev.a 00:15:07.660 SO libspdk_bdev.so.15.0 00:15:07.922 SYMLINK libspdk_bdev.so 00:15:07.922 LIB libspdk_blobfs.a 00:15:07.922 SO libspdk_blobfs.so.10.0 00:15:07.922 CC lib/nbd/nbd.o 00:15:07.922 CC lib/scsi/dev.o 00:15:07.922 CC lib/ublk/ublk.o 00:15:07.922 CC lib/nvmf/ctrlr.o 00:15:07.922 CC lib/scsi/lun.o 00:15:07.922 CC lib/nbd/nbd_rpc.o 00:15:07.922 CC lib/nvmf/ctrlr_discovery.o 00:15:07.922 CC lib/ublk/ublk_rpc.o 00:15:07.922 CC lib/scsi/port.o 00:15:07.922 CC lib/ftl/ftl_core.o 00:15:07.922 CC lib/nvmf/ctrlr_bdev.o 00:15:07.922 CC lib/ftl/ftl_init.o 00:15:07.922 CC lib/scsi/scsi.o 00:15:07.922 CC lib/nvmf/subsystem.o 00:15:07.922 CC lib/scsi/scsi_bdev.o 00:15:07.922 CC lib/scsi/scsi_pr.o 00:15:07.922 CC lib/ftl/ftl_layout.o 00:15:07.922 CC lib/nvmf/nvmf.o 00:15:07.922 CC lib/scsi/scsi_rpc.o 00:15:07.922 CC lib/ftl/ftl_debug.o 00:15:07.922 CC lib/nvmf/nvmf_rpc.o 00:15:07.922 CC lib/nvmf/transport.o 00:15:07.922 CC lib/scsi/task.o 00:15:07.922 CC lib/ftl/ftl_io.o 00:15:07.922 CC lib/nvmf/tcp.o 00:15:07.922 CC lib/ftl/ftl_sb.o 00:15:07.922 CC lib/ftl/ftl_l2p.o 00:15:07.922 CC lib/nvmf/stubs.o 00:15:07.922 CC lib/ftl/ftl_l2p_flat.o 00:15:07.922 CC lib/nvmf/vfio_user.o 00:15:07.922 CC lib/nvmf/mdns_server.o 00:15:07.922 CC lib/ftl/ftl_nv_cache.o 00:15:07.922 CC lib/ftl/ftl_band.o 00:15:07.922 CC lib/nvmf/rdma.o 00:15:07.922 CC lib/nvmf/auth.o 00:15:07.922 CC lib/ftl/ftl_band_ops.o 00:15:07.922 CC lib/ftl/ftl_writer.o 00:15:07.922 CC lib/ftl/ftl_rq.o 00:15:07.922 CC lib/ftl/ftl_reloc.o 00:15:07.922 CC lib/ftl/ftl_l2p_cache.o 00:15:07.922 CC lib/ftl/ftl_p2l.o 00:15:07.922 CC lib/ftl/mngt/ftl_mngt.o 00:15:07.922 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:15:07.922 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:15:07.922 CC lib/ftl/mngt/ftl_mngt_startup.o 00:15:07.922 CC lib/ftl/mngt/ftl_mngt_md.o 00:15:08.181 LIB libspdk_lvol.a 00:15:08.181 SYMLINK libspdk_blobfs.so 00:15:08.181 CC lib/ftl/mngt/ftl_mngt_misc.o 00:15:08.181 SO libspdk_lvol.so.10.0 00:15:08.181 SYMLINK libspdk_lvol.so 00:15:08.181 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:15:08.447 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:15:08.447 CC lib/ftl/mngt/ftl_mngt_band.o 00:15:08.447 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:15:08.447 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:15:08.447 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:15:08.447 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:15:08.447 CC lib/ftl/utils/ftl_conf.o 00:15:08.447 CC lib/ftl/utils/ftl_md.o 00:15:08.447 CC lib/ftl/utils/ftl_mempool.o 00:15:08.447 CC lib/ftl/utils/ftl_bitmap.o 00:15:08.447 CC 
lib/ftl/utils/ftl_property.o 00:15:08.447 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:15:08.447 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:15:08.447 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:15:08.447 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:15:08.447 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:15:08.447 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:15:08.447 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:15:08.708 CC lib/ftl/upgrade/ftl_sb_v3.o 00:15:08.708 CC lib/ftl/upgrade/ftl_sb_v5.o 00:15:08.708 CC lib/ftl/nvc/ftl_nvc_dev.o 00:15:08.708 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:15:08.708 CC lib/ftl/base/ftl_base_dev.o 00:15:08.708 CC lib/ftl/base/ftl_base_bdev.o 00:15:08.708 CC lib/ftl/ftl_trace.o 00:15:08.708 LIB libspdk_nbd.a 00:15:08.967 SO libspdk_nbd.so.7.0 00:15:08.967 SYMLINK libspdk_nbd.so 00:15:08.967 LIB libspdk_scsi.a 00:15:08.967 SO libspdk_scsi.so.9.0 00:15:08.967 LIB libspdk_ublk.a 00:15:08.967 SYMLINK libspdk_scsi.so 00:15:09.225 SO libspdk_ublk.so.3.0 00:15:09.225 SYMLINK libspdk_ublk.so 00:15:09.225 CC lib/vhost/vhost.o 00:15:09.225 CC lib/iscsi/conn.o 00:15:09.225 CC lib/vhost/vhost_rpc.o 00:15:09.225 CC lib/iscsi/init_grp.o 00:15:09.225 CC lib/vhost/vhost_scsi.o 00:15:09.225 CC lib/iscsi/iscsi.o 00:15:09.225 CC lib/vhost/vhost_blk.o 00:15:09.225 CC lib/iscsi/md5.o 00:15:09.225 CC lib/vhost/rte_vhost_user.o 00:15:09.225 CC lib/iscsi/param.o 00:15:09.225 CC lib/iscsi/portal_grp.o 00:15:09.225 CC lib/iscsi/tgt_node.o 00:15:09.225 CC lib/iscsi/iscsi_subsystem.o 00:15:09.225 CC lib/iscsi/iscsi_rpc.o 00:15:09.225 CC lib/iscsi/task.o 00:15:09.484 LIB libspdk_ftl.a 00:15:09.484 SO libspdk_ftl.so.9.0 00:15:10.050 SYMLINK libspdk_ftl.so 00:15:10.617 LIB libspdk_vhost.a 00:15:10.617 SO libspdk_vhost.so.8.0 00:15:10.617 SYMLINK libspdk_vhost.so 00:15:10.617 LIB libspdk_nvmf.a 00:15:10.617 LIB libspdk_iscsi.a 00:15:10.617 SO libspdk_nvmf.so.18.0 00:15:10.617 SO libspdk_iscsi.so.8.0 00:15:10.875 SYMLINK libspdk_iscsi.so 00:15:10.875 SYMLINK libspdk_nvmf.so 00:15:11.133 CC module/env_dpdk/env_dpdk_rpc.o 00:15:11.133 CC module/vfu_device/vfu_virtio.o 00:15:11.133 CC module/vfu_device/vfu_virtio_blk.o 00:15:11.134 CC module/vfu_device/vfu_virtio_scsi.o 00:15:11.134 CC module/vfu_device/vfu_virtio_rpc.o 00:15:11.134 CC module/blob/bdev/blob_bdev.o 00:15:11.134 CC module/accel/dsa/accel_dsa.o 00:15:11.134 CC module/accel/dsa/accel_dsa_rpc.o 00:15:11.134 CC module/accel/error/accel_error.o 00:15:11.134 CC module/accel/error/accel_error_rpc.o 00:15:11.134 CC module/keyring/linux/keyring.o 00:15:11.134 CC module/sock/posix/posix.o 00:15:11.134 CC module/keyring/file/keyring.o 00:15:11.134 CC module/accel/ioat/accel_ioat.o 00:15:11.134 CC module/keyring/linux/keyring_rpc.o 00:15:11.134 CC module/scheduler/gscheduler/gscheduler.o 00:15:11.134 CC module/keyring/file/keyring_rpc.o 00:15:11.134 CC module/accel/ioat/accel_ioat_rpc.o 00:15:11.134 CC module/scheduler/dynamic/scheduler_dynamic.o 00:15:11.134 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:15:11.134 CC module/accel/iaa/accel_iaa.o 00:15:11.134 CC module/accel/iaa/accel_iaa_rpc.o 00:15:11.392 LIB libspdk_env_dpdk_rpc.a 00:15:11.392 SO libspdk_env_dpdk_rpc.so.6.0 00:15:11.392 SYMLINK libspdk_env_dpdk_rpc.so 00:15:11.392 LIB libspdk_keyring_linux.a 00:15:11.392 LIB libspdk_scheduler_dpdk_governor.a 00:15:11.392 LIB libspdk_keyring_file.a 00:15:11.392 LIB libspdk_scheduler_gscheduler.a 00:15:11.392 SO libspdk_scheduler_dpdk_governor.so.4.0 00:15:11.392 SO libspdk_scheduler_gscheduler.so.4.0 00:15:11.392 SO libspdk_keyring_linux.so.1.0 00:15:11.392 
SO libspdk_keyring_file.so.1.0 00:15:11.392 LIB libspdk_accel_error.a 00:15:11.392 LIB libspdk_scheduler_dynamic.a 00:15:11.392 LIB libspdk_accel_ioat.a 00:15:11.392 SO libspdk_accel_error.so.2.0 00:15:11.392 LIB libspdk_accel_iaa.a 00:15:11.392 SO libspdk_scheduler_dynamic.so.4.0 00:15:11.392 SO libspdk_accel_ioat.so.6.0 00:15:11.392 SYMLINK libspdk_scheduler_gscheduler.so 00:15:11.392 SYMLINK libspdk_scheduler_dpdk_governor.so 00:15:11.392 SYMLINK libspdk_keyring_file.so 00:15:11.392 SYMLINK libspdk_keyring_linux.so 00:15:11.392 SO libspdk_accel_iaa.so.3.0 00:15:11.392 LIB libspdk_accel_dsa.a 00:15:11.651 SYMLINK libspdk_accel_error.so 00:15:11.651 SYMLINK libspdk_scheduler_dynamic.so 00:15:11.651 LIB libspdk_blob_bdev.a 00:15:11.651 SO libspdk_accel_dsa.so.5.0 00:15:11.651 SYMLINK libspdk_accel_ioat.so 00:15:11.651 SO libspdk_blob_bdev.so.11.0 00:15:11.651 SYMLINK libspdk_accel_iaa.so 00:15:11.651 SYMLINK libspdk_accel_dsa.so 00:15:11.651 SYMLINK libspdk_blob_bdev.so 00:15:11.910 LIB libspdk_vfu_device.a 00:15:11.910 SO libspdk_vfu_device.so.3.0 00:15:11.910 CC module/bdev/error/vbdev_error.o 00:15:11.910 CC module/bdev/virtio/bdev_virtio_scsi.o 00:15:11.910 CC module/bdev/zone_block/vbdev_zone_block.o 00:15:11.910 CC module/bdev/aio/bdev_aio.o 00:15:11.910 CC module/bdev/error/vbdev_error_rpc.o 00:15:11.910 CC module/bdev/virtio/bdev_virtio_blk.o 00:15:11.910 CC module/bdev/delay/vbdev_delay.o 00:15:11.910 CC module/bdev/virtio/bdev_virtio_rpc.o 00:15:11.910 CC module/bdev/aio/bdev_aio_rpc.o 00:15:11.910 CC module/bdev/iscsi/bdev_iscsi.o 00:15:11.910 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:15:11.910 CC module/bdev/delay/vbdev_delay_rpc.o 00:15:11.910 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:15:11.910 CC module/bdev/gpt/gpt.o 00:15:11.910 CC module/bdev/malloc/bdev_malloc.o 00:15:11.910 CC module/bdev/split/vbdev_split.o 00:15:11.910 CC module/bdev/ftl/bdev_ftl.o 00:15:11.910 CC module/bdev/nvme/bdev_nvme.o 00:15:11.910 CC module/bdev/raid/bdev_raid.o 00:15:11.910 CC module/bdev/nvme/bdev_nvme_rpc.o 00:15:11.910 CC module/bdev/lvol/vbdev_lvol.o 00:15:11.910 CC module/bdev/ftl/bdev_ftl_rpc.o 00:15:11.910 CC module/bdev/split/vbdev_split_rpc.o 00:15:11.910 CC module/bdev/raid/bdev_raid_rpc.o 00:15:11.910 CC module/bdev/gpt/vbdev_gpt.o 00:15:11.910 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:15:11.910 CC module/bdev/malloc/bdev_malloc_rpc.o 00:15:11.910 CC module/bdev/passthru/vbdev_passthru.o 00:15:11.910 CC module/bdev/nvme/nvme_rpc.o 00:15:11.910 CC module/blobfs/bdev/blobfs_bdev.o 00:15:11.910 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:15:11.910 CC module/bdev/raid/bdev_raid_sb.o 00:15:11.910 CC module/bdev/raid/raid0.o 00:15:11.910 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:15:11.910 CC module/bdev/null/bdev_null.o 00:15:11.910 CC module/bdev/nvme/bdev_mdns_client.o 00:15:11.910 CC module/bdev/raid/raid1.o 00:15:11.910 CC module/bdev/null/bdev_null_rpc.o 00:15:11.910 CC module/bdev/nvme/vbdev_opal.o 00:15:11.910 CC module/bdev/raid/concat.o 00:15:11.910 CC module/bdev/nvme/vbdev_opal_rpc.o 00:15:11.910 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:15:11.910 SYMLINK libspdk_vfu_device.so 00:15:12.169 LIB libspdk_sock_posix.a 00:15:12.169 SO libspdk_sock_posix.so.6.0 00:15:12.169 LIB libspdk_bdev_null.a 00:15:12.169 LIB libspdk_blobfs_bdev.a 00:15:12.169 SO libspdk_bdev_null.so.6.0 00:15:12.169 SO libspdk_blobfs_bdev.so.6.0 00:15:12.169 SYMLINK libspdk_sock_posix.so 00:15:12.428 LIB libspdk_bdev_split.a 00:15:12.428 LIB libspdk_bdev_zone_block.a 00:15:12.428 LIB 
libspdk_bdev_error.a 00:15:12.428 SYMLINK libspdk_bdev_null.so 00:15:12.428 SO libspdk_bdev_split.so.6.0 00:15:12.428 SO libspdk_bdev_zone_block.so.6.0 00:15:12.428 SYMLINK libspdk_blobfs_bdev.so 00:15:12.428 SO libspdk_bdev_error.so.6.0 00:15:12.428 LIB libspdk_bdev_ftl.a 00:15:12.428 LIB libspdk_bdev_aio.a 00:15:12.428 LIB libspdk_bdev_gpt.a 00:15:12.428 SYMLINK libspdk_bdev_split.so 00:15:12.428 SYMLINK libspdk_bdev_zone_block.so 00:15:12.428 SO libspdk_bdev_ftl.so.6.0 00:15:12.428 SYMLINK libspdk_bdev_error.so 00:15:12.428 SO libspdk_bdev_aio.so.6.0 00:15:12.428 SO libspdk_bdev_gpt.so.6.0 00:15:12.428 LIB libspdk_bdev_passthru.a 00:15:12.428 LIB libspdk_bdev_malloc.a 00:15:12.428 SO libspdk_bdev_passthru.so.6.0 00:15:12.428 SYMLINK libspdk_bdev_ftl.so 00:15:12.428 SO libspdk_bdev_malloc.so.6.0 00:15:12.428 LIB libspdk_bdev_iscsi.a 00:15:12.428 SYMLINK libspdk_bdev_aio.so 00:15:12.428 SYMLINK libspdk_bdev_gpt.so 00:15:12.428 LIB libspdk_bdev_delay.a 00:15:12.428 SO libspdk_bdev_iscsi.so.6.0 00:15:12.428 SYMLINK libspdk_bdev_passthru.so 00:15:12.428 SO libspdk_bdev_delay.so.6.0 00:15:12.428 SYMLINK libspdk_bdev_malloc.so 00:15:12.686 SYMLINK libspdk_bdev_iscsi.so 00:15:12.686 SYMLINK libspdk_bdev_delay.so 00:15:12.686 LIB libspdk_bdev_lvol.a 00:15:12.686 SO libspdk_bdev_lvol.so.6.0 00:15:12.686 LIB libspdk_bdev_virtio.a 00:15:12.686 SYMLINK libspdk_bdev_lvol.so 00:15:12.686 SO libspdk_bdev_virtio.so.6.0 00:15:12.686 SYMLINK libspdk_bdev_virtio.so 00:15:12.945 LIB libspdk_bdev_raid.a 00:15:13.203 SO libspdk_bdev_raid.so.6.0 00:15:13.203 SYMLINK libspdk_bdev_raid.so 00:15:14.142 LIB libspdk_bdev_nvme.a 00:15:14.400 SO libspdk_bdev_nvme.so.7.0 00:15:14.400 SYMLINK libspdk_bdev_nvme.so 00:15:14.658 CC module/event/subsystems/sock/sock.o 00:15:14.658 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:15:14.658 CC module/event/subsystems/keyring/keyring.o 00:15:14.658 CC module/event/subsystems/vmd/vmd.o 00:15:14.658 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:15:14.658 CC module/event/subsystems/iobuf/iobuf.o 00:15:14.658 CC module/event/subsystems/vmd/vmd_rpc.o 00:15:14.658 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:15:14.658 CC module/event/subsystems/scheduler/scheduler.o 00:15:14.916 LIB libspdk_event_keyring.a 00:15:14.917 LIB libspdk_event_vhost_blk.a 00:15:14.917 LIB libspdk_event_vfu_tgt.a 00:15:14.917 LIB libspdk_event_vmd.a 00:15:14.917 LIB libspdk_event_scheduler.a 00:15:14.917 LIB libspdk_event_sock.a 00:15:14.917 SO libspdk_event_keyring.so.1.0 00:15:14.917 LIB libspdk_event_iobuf.a 00:15:14.917 SO libspdk_event_vhost_blk.so.3.0 00:15:14.917 SO libspdk_event_vfu_tgt.so.3.0 00:15:14.917 SO libspdk_event_sock.so.5.0 00:15:14.917 SO libspdk_event_scheduler.so.4.0 00:15:14.917 SO libspdk_event_vmd.so.6.0 00:15:14.917 SO libspdk_event_iobuf.so.3.0 00:15:14.917 SYMLINK libspdk_event_keyring.so 00:15:14.917 SYMLINK libspdk_event_vhost_blk.so 00:15:14.917 SYMLINK libspdk_event_vfu_tgt.so 00:15:14.917 SYMLINK libspdk_event_sock.so 00:15:14.917 SYMLINK libspdk_event_scheduler.so 00:15:14.917 SYMLINK libspdk_event_vmd.so 00:15:14.917 SYMLINK libspdk_event_iobuf.so 00:15:15.174 CC module/event/subsystems/accel/accel.o 00:15:15.432 LIB libspdk_event_accel.a 00:15:15.432 SO libspdk_event_accel.so.6.0 00:15:15.432 SYMLINK libspdk_event_accel.so 00:15:15.690 CC module/event/subsystems/bdev/bdev.o 00:15:15.690 LIB libspdk_event_bdev.a 00:15:15.690 SO libspdk_event_bdev.so.6.0 00:15:15.948 SYMLINK libspdk_event_bdev.so 00:15:15.948 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:15:15.948 
CC module/event/subsystems/ublk/ublk.o 00:15:15.948 CC module/event/subsystems/scsi/scsi.o 00:15:15.948 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:15:15.948 CC module/event/subsystems/nbd/nbd.o 00:15:16.206 LIB libspdk_event_nbd.a 00:15:16.206 LIB libspdk_event_ublk.a 00:15:16.206 LIB libspdk_event_scsi.a 00:15:16.206 SO libspdk_event_nbd.so.6.0 00:15:16.206 SO libspdk_event_ublk.so.3.0 00:15:16.206 SO libspdk_event_scsi.so.6.0 00:15:16.206 SYMLINK libspdk_event_ublk.so 00:15:16.206 SYMLINK libspdk_event_nbd.so 00:15:16.206 SYMLINK libspdk_event_scsi.so 00:15:16.206 LIB libspdk_event_nvmf.a 00:15:16.206 SO libspdk_event_nvmf.so.6.0 00:15:16.206 SYMLINK libspdk_event_nvmf.so 00:15:16.464 CC module/event/subsystems/iscsi/iscsi.o 00:15:16.464 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:15:16.464 LIB libspdk_event_vhost_scsi.a 00:15:16.464 LIB libspdk_event_iscsi.a 00:15:16.464 SO libspdk_event_vhost_scsi.so.3.0 00:15:16.724 SO libspdk_event_iscsi.so.6.0 00:15:16.724 SYMLINK libspdk_event_vhost_scsi.so 00:15:16.724 SYMLINK libspdk_event_iscsi.so 00:15:16.724 SO libspdk.so.6.0 00:15:16.724 SYMLINK libspdk.so 00:15:16.984 CC app/spdk_lspci/spdk_lspci.o 00:15:16.984 TEST_HEADER include/spdk/accel.h 00:15:16.984 CC test/rpc_client/rpc_client_test.o 00:15:16.984 CXX app/trace/trace.o 00:15:16.984 TEST_HEADER include/spdk/accel_module.h 00:15:16.984 CC app/spdk_nvme_perf/perf.o 00:15:16.984 CC app/spdk_top/spdk_top.o 00:15:16.984 TEST_HEADER include/spdk/assert.h 00:15:16.984 CC app/spdk_nvme_discover/discovery_aer.o 00:15:16.984 CC app/trace_record/trace_record.o 00:15:16.984 TEST_HEADER include/spdk/barrier.h 00:15:16.984 CC app/spdk_nvme_identify/identify.o 00:15:16.984 TEST_HEADER include/spdk/base64.h 00:15:16.984 TEST_HEADER include/spdk/bdev.h 00:15:16.984 TEST_HEADER include/spdk/bdev_module.h 00:15:16.984 TEST_HEADER include/spdk/bdev_zone.h 00:15:16.984 TEST_HEADER include/spdk/bit_array.h 00:15:16.984 TEST_HEADER include/spdk/bit_pool.h 00:15:16.984 TEST_HEADER include/spdk/blob_bdev.h 00:15:16.984 TEST_HEADER include/spdk/blobfs_bdev.h 00:15:16.984 TEST_HEADER include/spdk/blobfs.h 00:15:16.984 TEST_HEADER include/spdk/blob.h 00:15:16.985 TEST_HEADER include/spdk/conf.h 00:15:16.985 TEST_HEADER include/spdk/config.h 00:15:16.985 TEST_HEADER include/spdk/cpuset.h 00:15:16.985 TEST_HEADER include/spdk/crc16.h 00:15:16.985 TEST_HEADER include/spdk/crc32.h 00:15:16.985 CC examples/interrupt_tgt/interrupt_tgt.o 00:15:16.985 TEST_HEADER include/spdk/crc64.h 00:15:16.985 TEST_HEADER include/spdk/dif.h 00:15:16.985 TEST_HEADER include/spdk/dma.h 00:15:16.985 TEST_HEADER include/spdk/endian.h 00:15:16.985 CC app/spdk_dd/spdk_dd.o 00:15:16.985 TEST_HEADER include/spdk/env_dpdk.h 00:15:16.985 TEST_HEADER include/spdk/env.h 00:15:16.985 CC app/nvmf_tgt/nvmf_main.o 00:15:16.985 TEST_HEADER include/spdk/event.h 00:15:16.985 CC app/iscsi_tgt/iscsi_tgt.o 00:15:16.985 TEST_HEADER include/spdk/fd_group.h 00:15:16.985 TEST_HEADER include/spdk/fd.h 00:15:16.985 TEST_HEADER include/spdk/file.h 00:15:16.985 TEST_HEADER include/spdk/ftl.h 00:15:16.985 CC app/vhost/vhost.o 00:15:16.985 TEST_HEADER include/spdk/gpt_spec.h 00:15:16.985 TEST_HEADER include/spdk/hexlify.h 00:15:16.985 TEST_HEADER include/spdk/histogram_data.h 00:15:16.985 TEST_HEADER include/spdk/idxd.h 00:15:16.985 TEST_HEADER include/spdk/idxd_spec.h 00:15:16.985 TEST_HEADER include/spdk/init.h 00:15:16.985 CC examples/ioat/perf/perf.o 00:15:16.985 TEST_HEADER include/spdk/ioat.h 00:15:17.268 TEST_HEADER include/spdk/ioat_spec.h 
00:15:17.268 CC examples/ioat/verify/verify.o 00:15:17.268 CC test/event/event_perf/event_perf.o 00:15:17.268 TEST_HEADER include/spdk/iscsi_spec.h 00:15:17.268 CC test/app/histogram_perf/histogram_perf.o 00:15:17.268 CC test/app/jsoncat/jsoncat.o 00:15:17.268 TEST_HEADER include/spdk/json.h 00:15:17.268 CC test/app/stub/stub.o 00:15:17.268 CC examples/sock/hello_world/hello_sock.o 00:15:17.268 TEST_HEADER include/spdk/jsonrpc.h 00:15:17.268 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:15:17.268 CC examples/util/zipf/zipf.o 00:15:17.268 TEST_HEADER include/spdk/keyring.h 00:15:17.268 CC app/spdk_tgt/spdk_tgt.o 00:15:17.268 CC examples/accel/perf/accel_perf.o 00:15:17.268 CC examples/idxd/perf/perf.o 00:15:17.268 TEST_HEADER include/spdk/keyring_module.h 00:15:17.268 CC test/env/vtophys/vtophys.o 00:15:17.268 CC test/thread/poller_perf/poller_perf.o 00:15:17.268 CC test/event/reactor/reactor.o 00:15:17.268 TEST_HEADER include/spdk/likely.h 00:15:17.268 CC examples/vmd/led/led.o 00:15:17.268 TEST_HEADER include/spdk/log.h 00:15:17.268 CC test/nvme/aer/aer.o 00:15:17.268 TEST_HEADER include/spdk/lvol.h 00:15:17.268 CC examples/vmd/lsvmd/lsvmd.o 00:15:17.268 TEST_HEADER include/spdk/memory.h 00:15:17.268 CC app/fio/nvme/fio_plugin.o 00:15:17.268 TEST_HEADER include/spdk/mmio.h 00:15:17.268 CC examples/nvme/hello_world/hello_world.o 00:15:17.268 TEST_HEADER include/spdk/nbd.h 00:15:17.268 TEST_HEADER include/spdk/notify.h 00:15:17.268 TEST_HEADER include/spdk/nvme.h 00:15:17.268 TEST_HEADER include/spdk/nvme_intel.h 00:15:17.268 TEST_HEADER include/spdk/nvme_ocssd.h 00:15:17.268 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:15:17.268 TEST_HEADER include/spdk/nvme_spec.h 00:15:17.268 CC examples/blob/cli/blobcli.o 00:15:17.268 CC examples/nvmf/nvmf/nvmf.o 00:15:17.268 TEST_HEADER include/spdk/nvme_zns.h 00:15:17.268 CC examples/bdev/hello_world/hello_bdev.o 00:15:17.268 CC test/accel/dif/dif.o 00:15:17.268 TEST_HEADER include/spdk/nvmf_cmd.h 00:15:17.268 CC examples/blob/hello_world/hello_blob.o 00:15:17.268 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:15:17.268 CC examples/bdev/bdevperf/bdevperf.o 00:15:17.268 CC test/bdev/bdevio/bdevio.o 00:15:17.268 CC test/app/bdev_svc/bdev_svc.o 00:15:17.268 TEST_HEADER include/spdk/nvmf.h 00:15:17.268 CC test/dma/test_dma/test_dma.o 00:15:17.268 TEST_HEADER include/spdk/nvmf_spec.h 00:15:17.268 TEST_HEADER include/spdk/nvmf_transport.h 00:15:17.268 CC test/blobfs/mkfs/mkfs.o 00:15:17.268 TEST_HEADER include/spdk/opal.h 00:15:17.268 CC examples/thread/thread/thread_ex.o 00:15:17.268 TEST_HEADER include/spdk/opal_spec.h 00:15:17.268 TEST_HEADER include/spdk/pci_ids.h 00:15:17.268 TEST_HEADER include/spdk/pipe.h 00:15:17.268 TEST_HEADER include/spdk/queue.h 00:15:17.268 TEST_HEADER include/spdk/reduce.h 00:15:17.268 TEST_HEADER include/spdk/rpc.h 00:15:17.268 TEST_HEADER include/spdk/scheduler.h 00:15:17.268 TEST_HEADER include/spdk/scsi.h 00:15:17.268 TEST_HEADER include/spdk/scsi_spec.h 00:15:17.268 TEST_HEADER include/spdk/sock.h 00:15:17.268 TEST_HEADER include/spdk/stdinc.h 00:15:17.268 TEST_HEADER include/spdk/string.h 00:15:17.268 TEST_HEADER include/spdk/thread.h 00:15:17.268 TEST_HEADER include/spdk/trace.h 00:15:17.268 TEST_HEADER include/spdk/trace_parser.h 00:15:17.268 LINK spdk_lspci 00:15:17.268 CC test/env/mem_callbacks/mem_callbacks.o 00:15:17.268 TEST_HEADER include/spdk/tree.h 00:15:17.268 TEST_HEADER include/spdk/ublk.h 00:15:17.268 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:15:17.268 TEST_HEADER include/spdk/util.h 00:15:17.268 CC 
test/lvol/esnap/esnap.o 00:15:17.268 TEST_HEADER include/spdk/uuid.h 00:15:17.268 TEST_HEADER include/spdk/version.h 00:15:17.268 TEST_HEADER include/spdk/vfio_user_pci.h 00:15:17.268 TEST_HEADER include/spdk/vfio_user_spec.h 00:15:17.268 TEST_HEADER include/spdk/vhost.h 00:15:17.268 TEST_HEADER include/spdk/vmd.h 00:15:17.268 TEST_HEADER include/spdk/xor.h 00:15:17.268 TEST_HEADER include/spdk/zipf.h 00:15:17.268 CXX test/cpp_headers/accel.o 00:15:17.268 LINK rpc_client_test 00:15:17.529 LINK spdk_nvme_discover 00:15:17.529 LINK jsoncat 00:15:17.529 LINK interrupt_tgt 00:15:17.529 LINK event_perf 00:15:17.529 LINK lsvmd 00:15:17.529 LINK histogram_perf 00:15:17.529 LINK poller_perf 00:15:17.529 LINK reactor 00:15:17.529 LINK led 00:15:17.529 LINK vtophys 00:15:17.529 LINK env_dpdk_post_init 00:15:17.529 LINK zipf 00:15:17.529 LINK nvmf_tgt 00:15:17.529 LINK spdk_trace_record 00:15:17.529 LINK vhost 00:15:17.529 LINK stub 00:15:17.529 LINK iscsi_tgt 00:15:17.529 LINK ioat_perf 00:15:17.529 LINK verify 00:15:17.529 LINK spdk_tgt 00:15:17.529 LINK bdev_svc 00:15:17.529 LINK hello_world 00:15:17.529 LINK mkfs 00:15:17.529 LINK hello_sock 00:15:17.790 LINK hello_blob 00:15:17.790 LINK mem_callbacks 00:15:17.790 CXX test/cpp_headers/accel_module.o 00:15:17.790 LINK hello_bdev 00:15:17.790 LINK aer 00:15:17.790 CXX test/cpp_headers/assert.o 00:15:17.790 LINK thread 00:15:17.790 LINK spdk_dd 00:15:17.790 CC test/env/memory/memory_ut.o 00:15:17.790 LINK idxd_perf 00:15:17.790 CXX test/cpp_headers/barrier.o 00:15:17.790 LINK nvmf 00:15:17.790 CC test/event/reactor_perf/reactor_perf.o 00:15:17.790 LINK spdk_trace 00:15:17.790 CXX test/cpp_headers/base64.o 00:15:17.790 CC examples/nvme/reconnect/reconnect.o 00:15:18.058 CC examples/nvme/nvme_manage/nvme_manage.o 00:15:18.058 CC test/nvme/sgl/sgl.o 00:15:18.058 CC test/nvme/reset/reset.o 00:15:18.058 CC test/nvme/e2edp/nvme_dp.o 00:15:18.058 CXX test/cpp_headers/bdev.o 00:15:18.058 CC examples/nvme/arbitration/arbitration.o 00:15:18.058 LINK test_dma 00:15:18.058 CC app/fio/bdev/fio_plugin.o 00:15:18.058 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:15:18.058 LINK bdevio 00:15:18.058 CC test/nvme/overhead/overhead.o 00:15:18.058 CC test/event/app_repeat/app_repeat.o 00:15:18.058 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:15:18.058 CXX test/cpp_headers/bdev_module.o 00:15:18.058 CC examples/nvme/hotplug/hotplug.o 00:15:18.058 CC examples/nvme/cmb_copy/cmb_copy.o 00:15:18.058 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:15:18.058 CXX test/cpp_headers/bdev_zone.o 00:15:18.058 CXX test/cpp_headers/bit_array.o 00:15:18.058 CXX test/cpp_headers/bit_pool.o 00:15:18.058 LINK dif 00:15:18.058 LINK accel_perf 00:15:18.058 CC test/event/scheduler/scheduler.o 00:15:18.058 CXX test/cpp_headers/blob_bdev.o 00:15:18.058 LINK nvme_fuzz 00:15:18.058 CC examples/nvme/abort/abort.o 00:15:18.058 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:15:18.058 CC test/env/pci/pci_ut.o 00:15:18.058 LINK reactor_perf 00:15:18.058 LINK blobcli 00:15:18.058 CC test/nvme/err_injection/err_injection.o 00:15:18.058 CXX test/cpp_headers/blobfs_bdev.o 00:15:18.318 LINK spdk_nvme 00:15:18.318 CC test/nvme/startup/startup.o 00:15:18.318 CXX test/cpp_headers/blobfs.o 00:15:18.318 CC test/nvme/reserve/reserve.o 00:15:18.318 CC test/nvme/simple_copy/simple_copy.o 00:15:18.318 CC test/nvme/boot_partition/boot_partition.o 00:15:18.318 CC test/nvme/connect_stress/connect_stress.o 00:15:18.318 CC test/nvme/compliance/nvme_compliance.o 00:15:18.318 LINK app_repeat 00:15:18.318 CXX 
test/cpp_headers/blob.o 00:15:18.318 CC test/nvme/doorbell_aers/doorbell_aers.o 00:15:18.318 CC test/nvme/fused_ordering/fused_ordering.o 00:15:18.318 CXX test/cpp_headers/conf.o 00:15:18.318 CXX test/cpp_headers/config.o 00:15:18.318 LINK cmb_copy 00:15:18.318 CXX test/cpp_headers/cpuset.o 00:15:18.318 CXX test/cpp_headers/crc16.o 00:15:18.318 CXX test/cpp_headers/crc32.o 00:15:18.583 CXX test/cpp_headers/crc64.o 00:15:18.583 CXX test/cpp_headers/dif.o 00:15:18.583 LINK reset 00:15:18.583 CXX test/cpp_headers/dma.o 00:15:18.583 CXX test/cpp_headers/endian.o 00:15:18.583 LINK hotplug 00:15:18.583 CXX test/cpp_headers/env_dpdk.o 00:15:18.583 LINK sgl 00:15:18.583 LINK nvme_dp 00:15:18.583 LINK pmr_persistence 00:15:18.583 CC test/nvme/fdp/fdp.o 00:15:18.583 CXX test/cpp_headers/env.o 00:15:18.583 LINK spdk_nvme_perf 00:15:18.583 CXX test/cpp_headers/event.o 00:15:18.583 LINK reconnect 00:15:18.583 LINK overhead 00:15:18.583 LINK err_injection 00:15:18.583 CC test/nvme/cuse/cuse.o 00:15:18.583 LINK scheduler 00:15:18.583 CXX test/cpp_headers/fd_group.o 00:15:18.583 LINK arbitration 00:15:18.583 CXX test/cpp_headers/fd.o 00:15:18.583 LINK spdk_nvme_identify 00:15:18.583 LINK startup 00:15:18.583 LINK boot_partition 00:15:18.583 CXX test/cpp_headers/file.o 00:15:18.583 LINK connect_stress 00:15:18.851 LINK bdevperf 00:15:18.851 LINK spdk_top 00:15:18.851 LINK reserve 00:15:18.851 CXX test/cpp_headers/ftl.o 00:15:18.851 CXX test/cpp_headers/gpt_spec.o 00:15:18.851 LINK simple_copy 00:15:18.851 LINK doorbell_aers 00:15:18.851 CXX test/cpp_headers/hexlify.o 00:15:18.851 LINK vhost_fuzz 00:15:18.851 CXX test/cpp_headers/histogram_data.o 00:15:18.851 CXX test/cpp_headers/idxd.o 00:15:18.851 CXX test/cpp_headers/idxd_spec.o 00:15:18.851 CXX test/cpp_headers/init.o 00:15:18.851 CXX test/cpp_headers/ioat.o 00:15:18.851 CXX test/cpp_headers/ioat_spec.o 00:15:18.851 LINK abort 00:15:18.851 CXX test/cpp_headers/iscsi_spec.o 00:15:18.851 LINK fused_ordering 00:15:18.851 CXX test/cpp_headers/json.o 00:15:18.851 LINK nvme_manage 00:15:18.851 CXX test/cpp_headers/jsonrpc.o 00:15:18.851 CXX test/cpp_headers/keyring.o 00:15:18.851 CXX test/cpp_headers/keyring_module.o 00:15:18.851 CXX test/cpp_headers/likely.o 00:15:18.851 CXX test/cpp_headers/log.o 00:15:18.851 CXX test/cpp_headers/lvol.o 00:15:18.851 LINK pci_ut 00:15:18.851 CXX test/cpp_headers/memory.o 00:15:18.851 CXX test/cpp_headers/mmio.o 00:15:18.851 CXX test/cpp_headers/nbd.o 00:15:18.851 LINK spdk_bdev 00:15:18.851 CXX test/cpp_headers/notify.o 00:15:18.851 CXX test/cpp_headers/nvme.o 00:15:18.851 CXX test/cpp_headers/nvme_intel.o 00:15:18.851 CXX test/cpp_headers/nvme_ocssd.o 00:15:18.851 CXX test/cpp_headers/nvme_ocssd_spec.o 00:15:18.851 CXX test/cpp_headers/nvme_spec.o 00:15:19.110 CXX test/cpp_headers/nvme_zns.o 00:15:19.110 LINK nvme_compliance 00:15:19.110 CXX test/cpp_headers/nvmf_cmd.o 00:15:19.110 CXX test/cpp_headers/nvmf_fc_spec.o 00:15:19.110 CXX test/cpp_headers/nvmf.o 00:15:19.110 CXX test/cpp_headers/nvmf_transport.o 00:15:19.110 CXX test/cpp_headers/nvmf_spec.o 00:15:19.110 CXX test/cpp_headers/opal.o 00:15:19.110 CXX test/cpp_headers/opal_spec.o 00:15:19.110 CXX test/cpp_headers/pci_ids.o 00:15:19.110 CXX test/cpp_headers/pipe.o 00:15:19.110 CXX test/cpp_headers/queue.o 00:15:19.110 CXX test/cpp_headers/reduce.o 00:15:19.110 CXX test/cpp_headers/rpc.o 00:15:19.110 CXX test/cpp_headers/scheduler.o 00:15:19.110 CXX test/cpp_headers/scsi.o 00:15:19.110 CXX test/cpp_headers/scsi_spec.o 00:15:19.110 CXX test/cpp_headers/sock.o 
00:15:19.110 CXX test/cpp_headers/stdinc.o 00:15:19.110 LINK memory_ut 00:15:19.110 CXX test/cpp_headers/string.o 00:15:19.110 LINK fdp 00:15:19.110 CXX test/cpp_headers/thread.o 00:15:19.110 CXX test/cpp_headers/trace.o 00:15:19.368 CXX test/cpp_headers/trace_parser.o 00:15:19.368 CXX test/cpp_headers/tree.o 00:15:19.368 CXX test/cpp_headers/ublk.o 00:15:19.368 CXX test/cpp_headers/util.o 00:15:19.368 CXX test/cpp_headers/uuid.o 00:15:19.368 CXX test/cpp_headers/version.o 00:15:19.368 CXX test/cpp_headers/vfio_user_pci.o 00:15:19.368 CXX test/cpp_headers/vfio_user_spec.o 00:15:19.368 CXX test/cpp_headers/vhost.o 00:15:19.368 CXX test/cpp_headers/vmd.o 00:15:19.368 CXX test/cpp_headers/xor.o 00:15:19.368 CXX test/cpp_headers/zipf.o 00:15:20.301 LINK cuse 00:15:20.301 LINK iscsi_fuzz 00:15:23.582 LINK esnap 00:15:23.582 00:15:23.582 real 0m40.533s 00:15:23.582 user 7m31.368s 00:15:23.582 sys 1m50.862s 00:15:23.582 16:28:43 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:15:23.582 16:28:43 make -- common/autotest_common.sh@10 -- $ set +x 00:15:23.582 ************************************ 00:15:23.582 END TEST make 00:15:23.582 ************************************ 00:15:23.582 16:28:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:15:23.582 16:28:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:15:23.582 16:28:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:15:23.582 16:28:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:23.582 16:28:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:15:23.582 16:28:43 -- pm/common@44 -- $ pid=2546547 00:15:23.582 16:28:43 -- pm/common@50 -- $ kill -TERM 2546547 00:15:23.582 16:28:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:23.582 16:28:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:15:23.582 16:28:43 -- pm/common@44 -- $ pid=2546549 00:15:23.582 16:28:43 -- pm/common@50 -- $ kill -TERM 2546549 00:15:23.582 16:28:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:23.582 16:28:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:15:23.582 16:28:43 -- pm/common@44 -- $ pid=2546550 00:15:23.582 16:28:43 -- pm/common@50 -- $ kill -TERM 2546550 00:15:23.582 16:28:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:23.582 16:28:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:15:23.582 16:28:43 -- pm/common@44 -- $ pid=2546577 00:15:23.582 16:28:43 -- pm/common@50 -- $ sudo -E kill -TERM 2546577 00:15:23.582 16:28:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.582 16:28:43 -- nvmf/common.sh@7 -- # uname -s 00:15:23.582 16:28:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.582 16:28:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.582 16:28:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.582 16:28:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.582 16:28:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.582 16:28:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.582 16:28:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.582 16:28:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.582 16:28:43 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.582 16:28:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.582 16:28:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:23.582 16:28:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:15:23.582 16:28:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.582 16:28:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.582 16:28:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.582 16:28:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.582 16:28:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.582 16:28:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.582 16:28:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.582 16:28:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.582 16:28:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.582 16:28:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.582 16:28:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.582 16:28:43 -- paths/export.sh@5 -- # export PATH 00:15:23.582 16:28:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.582 16:28:43 -- nvmf/common.sh@47 -- # : 0 00:15:23.582 16:28:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:23.582 16:28:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:23.582 16:28:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.582 16:28:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.582 16:28:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.582 16:28:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:23.582 16:28:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:23.582 16:28:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:23.582 16:28:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:15:23.582 16:28:43 -- spdk/autotest.sh@32 -- # uname -s 00:15:23.582 16:28:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:15:23.582 16:28:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:15:23.582 16:28:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:15:23.582 16:28:43 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:15:23.582 16:28:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:15:23.582 16:28:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:15:23.582 16:28:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:15:23.582 16:28:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:15:23.582 16:28:43 -- spdk/autotest.sh@48 -- # udevadm_pid=2621953 00:15:23.582 16:28:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:15:23.582 16:28:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:15:23.582 16:28:43 -- pm/common@17 -- # local monitor 00:15:23.582 16:28:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:15:23.582 16:28:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:15:23.582 16:28:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:15:23.582 16:28:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:15:23.582 16:28:43 -- pm/common@21 -- # date +%s 00:15:23.582 16:28:43 -- pm/common@21 -- # date +%s 00:15:23.582 16:28:43 -- pm/common@25 -- # sleep 1 00:15:23.582 16:28:43 -- pm/common@21 -- # date +%s 00:15:23.582 16:28:43 -- pm/common@21 -- # date +%s 00:15:23.582 16:28:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721658523 00:15:23.582 16:28:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721658523 00:15:23.582 16:28:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721658523 00:15:23.582 16:28:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721658523 00:15:23.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721658523_collect-vmstat.pm.log 00:15:23.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721658523_collect-cpu-load.pm.log 00:15:23.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721658523_collect-cpu-temp.pm.log 00:15:23.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721658523_collect-bmc-pm.bmc.pm.log 00:15:24.516 16:28:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:15:24.516 16:28:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:15:24.516 16:28:44 -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:24.516 16:28:44 -- common/autotest_common.sh@10 -- # set +x 00:15:24.517 16:28:44 -- spdk/autotest.sh@59 -- # create_test_list 00:15:24.517 16:28:44 -- common/autotest_common.sh@744 -- # xtrace_disable 00:15:24.517 16:28:44 -- common/autotest_common.sh@10 -- # set +x 00:15:24.774 16:28:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:15:24.774 16:28:44 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:24.774 16:28:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:24.774 16:28:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:15:24.774 16:28:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:24.774 16:28:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:15:24.774 16:28:44 -- common/autotest_common.sh@1451 -- # uname 00:15:24.774 16:28:44 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:15:24.774 16:28:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:15:24.774 16:28:44 -- common/autotest_common.sh@1471 -- # uname 00:15:24.774 16:28:44 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:15:24.774 16:28:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:15:24.774 16:28:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:15:24.774 16:28:44 -- spdk/autotest.sh@72 -- # hash lcov 00:15:24.774 16:28:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:15:24.774 16:28:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:15:24.774 --rc lcov_branch_coverage=1 00:15:24.774 --rc lcov_function_coverage=1 00:15:24.774 --rc genhtml_branch_coverage=1 00:15:24.774 --rc genhtml_function_coverage=1 00:15:24.774 --rc genhtml_legend=1 00:15:24.774 --rc geninfo_all_blocks=1 00:15:24.774 ' 00:15:24.774 16:28:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:15:24.774 --rc lcov_branch_coverage=1 00:15:24.774 --rc lcov_function_coverage=1 00:15:24.774 --rc genhtml_branch_coverage=1 00:15:24.774 --rc genhtml_function_coverage=1 00:15:24.774 --rc genhtml_legend=1 00:15:24.774 --rc geninfo_all_blocks=1 00:15:24.774 ' 00:15:24.774 16:28:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:15:24.774 --rc lcov_branch_coverage=1 00:15:24.775 --rc lcov_function_coverage=1 00:15:24.775 --rc genhtml_branch_coverage=1 00:15:24.775 --rc genhtml_function_coverage=1 00:15:24.775 --rc genhtml_legend=1 00:15:24.775 --rc geninfo_all_blocks=1 00:15:24.775 --no-external' 00:15:24.775 16:28:44 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:15:24.775 --rc lcov_branch_coverage=1 00:15:24.775 --rc lcov_function_coverage=1 00:15:24.775 --rc genhtml_branch_coverage=1 00:15:24.775 --rc genhtml_function_coverage=1 00:15:24.775 --rc genhtml_legend=1 00:15:24.775 --rc geninfo_all_blocks=1 00:15:24.775 --no-external' 00:15:24.775 16:28:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:15:24.775 lcov: LCOV version 1.14 00:15:24.775 16:28:44 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:15:39.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:15:39.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:15:54.534 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:15:54.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:15:54.534 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:15:54.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:15:54.534-00:15:54.535 the same "no functions found" / "geninfo: WARNING: GCOV did not produce any data" pair repeats for likely, log, lvol, memory, mmio, nbd, notify, nvme, nvme_intel, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvme_zns, nvmf_cmd, nvmf, nvmf_fc_spec, nvmf_transport, nvmf_spec, opal_spec, opal, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, tree, util, ublk, uuid, version and vfio_user_pci under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/
00:15:54.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no
functions found 00:15:54.535 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:15:54.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:15:54.535 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:15:54.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:15:54.535 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:15:54.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:15:54.535 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:15:54.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:15:54.535 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:15:57.820 16:29:16 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:15:57.820 16:29:16 -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:57.820 16:29:16 -- common/autotest_common.sh@10 -- # set +x 00:15:57.820 16:29:16 -- spdk/autotest.sh@91 -- # rm -f 00:15:57.820 16:29:16 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:15:58.755 0000:81:00.0 (8086 0a54): Already using the nvme driver 00:15:58.755 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:15:58.755 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:15:58.755 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:15:58.755 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:15:58.755 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:15:58.755 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:15:58.755 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:15:58.755 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:15:58.755 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:15:58.755 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:15:58.755 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:15:58.755 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:15:58.755 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:15:58.755 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:15:59.014 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:15:59.014 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:15:59.014 16:29:18 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:15:59.014 16:29:18 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:15:59.014 16:29:18 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:15:59.014 16:29:18 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:15:59.014 16:29:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:15:59.014 16:29:18 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:15:59.014 16:29:18 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:15:59.014 16:29:18 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:59.014 16:29:18 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:15:59.014 16:29:18 -- 
spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:15:59.014 16:29:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:15:59.014 16:29:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:15:59.014 16:29:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:15:59.014 16:29:18 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:15:59.014 16:29:18 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:59.014 No valid GPT data, bailing 00:15:59.014 16:29:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:59.014 16:29:18 -- scripts/common.sh@391 -- # pt= 00:15:59.014 16:29:18 -- scripts/common.sh@392 -- # return 1 00:15:59.014 16:29:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:15:59.014 1+0 records in 00:15:59.014 1+0 records out 00:15:59.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00182534 s, 574 MB/s 00:15:59.014 16:29:18 -- spdk/autotest.sh@118 -- # sync 00:15:59.014 16:29:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:15:59.014 16:29:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:15:59.014 16:29:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:16:00.916 16:29:20 -- spdk/autotest.sh@124 -- # uname -s 00:16:00.916 16:29:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:16:00.916 16:29:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:16:00.916 16:29:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:00.916 16:29:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:00.916 16:29:20 -- common/autotest_common.sh@10 -- # set +x 00:16:00.916 ************************************ 00:16:00.916 START TEST setup.sh 00:16:00.916 ************************************ 00:16:00.916 16:29:20 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:16:01.175 * Looking for test storage... 00:16:01.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:16:01.175 16:29:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:16:01.175 16:29:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:16:01.175 16:29:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:16:01.175 16:29:20 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:01.175 16:29:20 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:01.175 16:29:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:16:01.175 ************************************ 00:16:01.175 START TEST acl 00:16:01.175 ************************************ 00:16:01.175 16:29:20 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:16:01.175 * Looking for test storage... 
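The pre_cleanup pass traced above is worth unpacking: the harness skips any zoned NVMe namespace (a zoned device reports something other than "none" in /sys/block/<dev>/queue/zoned), then treats a namespace with no partition-table signature as unused and zeroes its first MiB. A minimal standalone sketch of that sequence, assuming root; the structure is illustrative and not the exact SPDK shell functions:

#!/usr/bin/env bash
# Sketch of the pre_cleanup device wipe traced above (assumption:
# run as root; helper structure is illustrative, not SPDK's code).
shopt -s nullglob
for sysdev in /sys/block/nvme*; do
  dev=${sysdev##*/}
  # Zoned namespaces are left alone: queue/zoned reads "none" only for
  # conventional devices (the trace shows [[ none != none ]] failing).
  if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
    continue
  fi
  # No PTTYPE from blkid means no recognizable partition table,
  # matching the "No valid GPT data, bailing" branch in the trace.
  if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
    dd if=/dev/zero of="/dev/$dev" bs=1M count=1   # wipe stale GPT/MBR headers
  fi
done
sync

The dd output in the log (1 MiB copied at ~574 MB/s) corresponds to that final zeroing step.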
00:16:01.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:16:01.175 16:29:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:16:01.175 16:29:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:16:01.175 16:29:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:16:01.175 16:29:20 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:16:01.175 16:29:20 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:16:01.175 16:29:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:16:01.175 16:29:20 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:16:01.175 16:29:20 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:01.175 16:29:20 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:01.175 16:29:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:16:01.175 16:29:20 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:16:01.175 16:29:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:16:01.175 16:29:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:16:01.175 16:29:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:16:01.175 16:29:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:01.175 16:29:20 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:16:03.078 16:29:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:16:03.078 16:29:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:16:03.078 16:29:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:03.078 16:29:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:16:03.078 16:29:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:16:03.078 16:29:22 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:16:04.014 Hugepages 00:16:04.014 node hugesize free / total 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.014 00:16:04.014 Type BDF Vendor Device NUMA Driver Device Block devices 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:16:04.014 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:81:00.0 == *:*:*.* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\1\:\0\0\.\0* ]] 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:16:04.015 16:29:23 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:16:04.015 16:29:23 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:04.015 16:29:23 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:04.015 16:29:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:16:04.015 ************************************ 00:16:04.015 START TEST denied 00:16:04.015 ************************************ 00:16:04.015 16:29:23 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:16:04.015 16:29:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:81:00.0' 00:16:04.015 16:29:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:16:04.015 16:29:23 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:81:00.0' 00:16:04.015 16:29:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:16:04.015 16:29:23 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:16:05.921 0000:81:00.0 (8086 0a54): Skipping denied controller at 0000:81:00.0 00:16:05.921 16:29:25 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:81:00.0 00:16:05.921 16:29:25 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:16:05.921 16:29:25 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:16:05.921 16:29:25 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:81:00.0 ]] 00:16:05.921 16:29:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:81:00.0/driver 00:16:05.921 16:29:25 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:16:05.921 16:29:25 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:16:05.921 16:29:25 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:16:05.921 16:29:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:05.921 16:29:25 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:16:08.457 00:16:08.457 real 0m4.225s 00:16:08.457 user 0m1.293s 00:16:08.457 sys 0m2.023s 00:16:08.457 16:29:27 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:08.457 16:29:27 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:16:08.457 ************************************ 00:16:08.457 END TEST denied 00:16:08.457 ************************************ 00:16:08.457 16:29:27 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:16:08.457 16:29:27 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:08.457 16:29:27 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:08.457 16:29:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:16:08.457 ************************************ 00:16:08.457 START TEST allowed 00:16:08.457 ************************************ 00:16:08.457 16:29:27 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:16:08.457 16:29:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:81:00.0 00:16:08.457 16:29:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:16:08.457 16:29:27 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:81:00.0 .*: nvme -> .*' 00:16:08.457 16:29:27 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:16:08.457 16:29:27 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:16:11.748 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:16:11.748 16:29:31 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:16:11.748 16:29:31 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:16:11.748 16:29:31 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:16:11.748 16:29:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:11.748 16:29:31 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:16:13.662 00:16:13.662 real 0m5.165s 00:16:13.662 user 0m1.177s 00:16:13.662 sys 0m1.912s 00:16:13.662 16:29:33 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:13.662 16:29:33 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:16:13.662 ************************************ 00:16:13.662 END TEST allowed 00:16:13.662 ************************************ 00:16:13.662 00:16:13.662 real 0m12.451s 00:16:13.662 user 0m3.732s 00:16:13.662 sys 0m5.823s 00:16:13.662 16:29:33 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:13.662 16:29:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:16:13.662 ************************************ 00:16:13.662 END TEST acl 00:16:13.662 ************************************ 00:16:13.662 16:29:33 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:16:13.662 16:29:33 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:13.662 16:29:33 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:16:13.662 16:29:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:16:13.662 ************************************ 00:16:13.662 START TEST hugepages 00:16:13.662 ************************************ 00:16:13.662 16:29:33 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:16:13.662 * Looking for test storage... 00:16:13.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:16:13.662 16:29:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:16:13.663 16:29:33 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36887432 kB' 'MemAvailable: 40668008 kB' 'Buffers: 8316 kB' 'Cached: 16576012 kB' 'SwapCached: 0 kB' 'Active: 13785644 kB' 'Inactive: 3538764 kB' 'Active(anon): 13336880 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 743364 kB' 'Mapped: 212792 kB' 'Shmem: 12596800 kB' 'KReclaimable: 437484 kB' 'Slab: 825224 kB' 'SReclaimable: 437484 kB' 'SUnreclaim: 387740 kB' 'KernelStack: 12992 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 15090980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198828 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB' 00:16:13.663 16:29:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
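The run of [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue records around this point is get_meminfo walking /proc/meminfo one field at a time until it reaches the requested key. A rough sketch of the pattern under trace, simplified (no per-node lookup; only the function name mirrors the trace):

# Sketch of the get_meminfo loop seen in the xtrace: split each
# /proc/meminfo line on ': ' and return the value of the first
# matching field (the unit, e.g. "kB", lands in the discarded third word).
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # every miss prints one continue record
    echo "$val"
    return 0
  done < /proc/meminfo
  return 1
}

get_meminfo Hugepagesize   # prints 2048 on this node, as in the trace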
00:16:13.663 16:29:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:16:13.663-00:16:13.664 16:29:33 setup.sh.hugepages -- setup/common.sh@31-32 -- # the IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue cycle repeats for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd
00:16:13.664 16:29:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:16:13.664 16:29:33 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:16:13.664 16:29:33 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:16:13.664 16:29:33 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:13.664 16:29:33 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:13.664 16:29:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:16:13.664 ************************************ 00:16:13.664 START TEST default_setup 00:16:13.664 ************************************ 00:16:13.664 16:29:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:16:13.664 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:16:13.664 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:16:13.664 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:16:13.664 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:16:13.664 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:16:13.664 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:16:13.664 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:13.664 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:16:13.665 16:29:33 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:16:15.043 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:16:15.043 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:16:15.043 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:16:15.043 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:16:15.043 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:16:15.043 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:16:15.043 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:16:15.043 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:16:15.043 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:16:15.043 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:16:15.302 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:16:15.302 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:16:15.302 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:16:15.302 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:16:15.302 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:16:15.302 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:16:17.216 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39000480 kB' 'MemAvailable: 42780952 kB' 'Buffers: 8316 kB' 'Cached: 16576124 kB' 'SwapCached: 0 kB' 'Active: 13804092 kB' 'Inactive: 3538764 kB' 'Active(anon): 13355328 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761800 kB' 'Mapped: 212884 kB' 'Shmem: 12596912 kB' 'KReclaimable: 437380 kB' 'Slab: 824652 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387272 kB' 'KernelStack: 12912 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15111500 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 198908 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB' 00:16:17.216 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:16:17.216-00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # the IFS=': ' / read -r var val _ / [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable and Slab
00:16:17.217 16:29:36
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.217 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
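When the scan reaches the requested key, common.sh@33 echoes the value and returns 0 - that is the '# echo 0' / '# return 0' pair a few entries above, which is why hugepages.sh@97 records anon=0 before the whole scan restarts for HugePages_Surp. A sketch of how verify_nr_hugepages consumes these values (variable names follow the trace; the THP guard mirrors hugepages.sh@96, which only reads AnonHugePages when transparent hugepages are not set to [never]):

    anon=0
    # sysfs reports e.g. "always [madvise] never"; [never] would skip the lookup
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anonymous memory
    fi
    surp=$(get_meminfo HugePages_Surp)      # pages allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)      # pages reserved but not yet faulted in

Each $(...) capture costs a full pass over /proc/meminfo, which is why the same MemTotal-to-DirectMap scan repeats four times in this stretch of the trace.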
00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39003532 kB' 'MemAvailable: 42784004 kB' 'Buffers: 8316 kB' 'Cached: 16576124 kB' 'SwapCached: 0 kB' 'Active: 13804704 kB' 'Inactive: 3538764 kB' 'Active(anon): 13355940 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 762436 kB' 'Mapped: 212884 kB' 'Shmem: 12596912 kB' 'KReclaimable: 437380 kB' 'Slab: 824644 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387264 kB' 'KernelStack: 12912 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15111516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.218 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
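The runs of backslashes like \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not corruption: when [[ $var == "$get" ]] compares against a quoted expansion, bash's xtrace prints every character of the pattern backslash-escaped to show it is matched literally rather than as a glob. A two-line reproduction under set -x (any recent bash):

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]]   # xtrace prints: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]

So each '[[ <key> == \H\u\g\e... ]]' / 'continue' pair in the log is simply one non-matching iteration of the scanning loop.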
00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.219 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39003544 kB' 'MemAvailable: 42784016 kB' 'Buffers: 8316 kB' 'Cached: 16576144 kB' 'SwapCached: 0 kB' 'Active: 13803736 kB' 'Inactive: 3538764 kB' 'Active(anon): 13354972 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761368 kB' 'Mapped: 212848 kB' 'Shmem: 12596932 kB' 'KReclaimable: 437380 kB' 'Slab: 824652 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387272 kB' 'KernelStack: 12864 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15111540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198876 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.220 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:16:17.221 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:16:17.221 16:29:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:16:17.221-222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # (condensed: remaining /proc/meminfo fields, Bounce through HugePages_Free, each tested against HugePages_Rsvd -- no match, continue)
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:16:17.222 nr_hugepages=1024
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:16:17.222 resv_hugepages=0
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:16:17.222 surplus_hugepages=0
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:16:17.222 anon_hugepages=0
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:16:17.222 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39003304 kB' 'MemAvailable: 42783776 kB' 'Buffers: 8316 kB' 'Cached: 16576164 kB' 'SwapCached: 0 kB' 'Active: 13803924 kB' 'Inactive: 3538764 kB' 'Active(anon): 13355160 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761576 kB' 'Mapped: 212848 kB' 'Shmem: 12596952 kB' 'KReclaimable: 437380 kB' 'Slab: 824652 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387272 kB' 'KernelStack: 12896 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15111560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198844 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
00:16:17.222-224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # (condensed: every field MemTotal through HugePages_Free tested against HugePages_Total -- no match, continue)
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
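For readability: the xtrace above is SPDK's get_meminfo helper in setup/common.sh walking /proc/meminfo one "field: value" pair at a time until the requested field matches. Below is a minimal standalone sketch of that pattern, with illustrative names rather than the exact SPDK source (whose line numbers appear in the trace).

#!/usr/bin/env bash
shopt -s extglob

# Sketch: print the value of one /proc/meminfo field, optionally for a single
# NUMA node, mirroring the mapfile / IFS=': ' / read loop shown in the trace.
meminfo_get() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix lines with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

meminfo_get HugePages_Total      # system-wide; 1024 in this run
meminfo_get HugePages_Rsvd       # 0 in this run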
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:16:17.224 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 24883904 kB' 'MemUsed: 7945980 kB' 'SwapCached: 0 kB' 'Active: 4749664 kB' 'Inactive: 138860 kB' 'Active(anon): 4408904 kB' 'Inactive(anon): 0 kB' 'Active(file): 340760 kB' 'Inactive(file): 138860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4334048 kB' 'Mapped: 117500 kB' 'AnonPages: 557680 kB' 'Shmem: 3854428 kB' 'KernelStack: 6616 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314116 kB' 'Slab: 542248 kB' 'SReclaimable: 314116 kB' 'SUnreclaim: 228132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:16:17.224-226 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # (condensed: node0 fields MemTotal through HugePages_Free tested against HugePages_Surp -- no match, continue)
00:16:17.226 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:17.226 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:16:17.226 16:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:16:17.226 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:16:17.226 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:16:17.226 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:16:17.226 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:16:17.226 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:16:17.226 node0=1024 expecting 1024
00:16:17.226 16:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:16:17.226
00:16:17.226 real	0m3.571s
00:16:17.226 user	0m0.746s
00:16:17.226 sys	0m1.015s
00:16:17.226 16:29:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:16:17.226 16:29:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:16:17.226 ************************************
00:16:17.226 END TEST default_setup
00:16:17.226 ************************************
00:16:17.226 16:29:36 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:16:17.226 16:29:36 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:16:17.226 16:29:36 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:16:17.226 16:29:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:16:17.485 ************************************
00:16:17.485 START TEST per_node_1G_alloc
00:16:17.485 ************************************
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
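The per_node_1G_alloc test that starts here splits a per-node size budget into default-sized hugepages for each requested NUMA node: 1048576 kB divided by the 2048 kB default hugepage size gives 512 pages each on nodes 0 and 1. A sketch of that arithmetic follows, with illustrative variable names rather than the exact hugepages.sh internals.

#!/usr/bin/env bash
# Sketch of the get_test_nr_hugepages split seen in the trace: size in kB
# divided by the default hugepage size, assigned to each user-supplied node.
size_kb=1048576                 # requested allocation, kB (1 GiB)
default_hugepages=2048          # Hugepagesize from /proc/meminfo, kB
user_nodes=(0 1)                # node ids passed to the test

nr_hugepages=$(( size_kb / default_hugepages ))   # 512
nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages
done

for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[$node]}"       # node0=512, node1=512
done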
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:16:17.485 16:29:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:16:18.870 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver
00:16:18.870 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:16:18.870 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:16:18.870 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:16:18.870 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:16:18.870 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:16:18.870 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:16:18.870 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:16:18.870 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:16:18.870 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:16:18.870 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:16:18.870 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:16:18.870 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:16:18.870 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:16:18.870 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:16:18.870 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:16:18.870 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:16:18.870 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38998396 kB' 'MemAvailable: 42778868 kB' 'Buffers: 8316 kB' 'Cached: 16576228 kB' 'SwapCached: 0 kB' 'Active: 13804644 kB' 'Inactive: 3538764 kB' 'Active(anon): 13355880 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 762088 kB' 'Mapped: 212916 kB' 'Shmem: 12597016 kB' 'KReclaimable: 437380 kB' 'Slab: 824580 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387200 kB' 'KernelStack: 12944 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15111748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
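The hugepages.sh@96 test above gates anonymous-hugepage accounting on transparent hugepages not being disabled: it checks that the kernel's THP mode string does not contain "[never]" before reading AnonHugePages. A standalone sketch of that guard, under the assumption that the standard sysfs path is present:

#!/usr/bin/env bash
# Sketch of the THP guard at setup/hugepages.sh@96: only read AnonHugePages
# when the kernel's THP mode is not pinned to "never".
thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
if [[ $thp_state != *"[never]"* ]]; then
    grep AnonHugePages /proc/meminfo    # "AnonHugePages: 0 kB" in this run
fi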
00:16:18.871-872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (condensed: fields MemTotal through Committed_AS tested against AnonHugePages -- no match, continue; trace truncated mid-scan)
16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
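The long run of "continue" entries above is setup/common.sh's get_meminfo walking every /proc/meminfo field until it hits the one requested (here AnonHugePages, which yields anon=0 before the HugePages_Surp query starts). Below is a minimal sketch of that loop, reconstructed from the xtrace line markers (@16 printf, @31 IFS/read, @32 continue, @33 echo/return); the names follow the trace, but this is an illustration under those assumptions, not the verbatim SPDK source:

```bash
#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=$2
	local var val _
	local mem_f=/proc/meminfo mem

	# When a NUMA node is given, prefer its per-node meminfo file.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	# Per-node lines carry a "Node N " prefix; strip it so keys line up.
	mem=("${mem[@]#Node +([0-9]) }")

	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "${val:-0}"  # value in kB, or a bare page count
			return 0
		fi
		continue  # one of these per non-matching field, as traced above
	done < <(printf '%s\n' "${mem[@]}")
}
```

The backslash-heavy patterns in the trace (e.g. \A\n\o\n\H\u\g\e\P\a\g\e\s) are just how xtrace prints the quoted "$get" right-hand side of that [[ ]] comparison.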
00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38998492 kB' 'MemAvailable: 42778964 kB' 'Buffers: 8316 kB' 'Cached: 16576228 kB' 'SwapCached: 0 kB' 'Active: 13804424 kB' 'Inactive: 3538764 kB' 'Active(anon): 13355660 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761880 kB' 'Mapped: 212872 kB' 'Shmem: 12597016 kB' 'KReclaimable: 437380 kB' 'Slab: 824516 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387136 kB' 'KernelStack: 12960 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15111764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.872 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.873 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:16:18.874 16:29:38 
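Worth noting in the scans above: every call runs with "local node=" left empty, so the per-node test path degenerates to /sys/devices/system/node/node/meminfo (no index), which never exists, and the scan always falls back to the system-wide /proc/meminfo. A small hedged illustration of that fallback:

```bash
node=                  # empty, exactly as "local node=" appears in the trace
mem_f=/proc/meminfo
# With no node index the candidate path is .../node/node/meminfo, which
# does not exist, so the system-wide file is kept.
[[ -e /sys/devices/system/node/node$node/meminfo ]] \
	&& mem_f=/sys/devices/system/node/node$node/meminfo
echo "reading: $mem_f"   # -> reading: /proc/meminfo
```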
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38999248 kB' 'MemAvailable: 42779720 kB' 'Buffers: 8316 kB' 'Cached: 16576232 kB' 'SwapCached: 0 kB' 'Active: 13804040 kB' 'Inactive: 3538764 kB' 'Active(anon): 13355276 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761492 kB' 'Mapped: 212872 kB' 'Shmem: 12597020 kB' 'KReclaimable: 437380 kB' 'Slab: 824572 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387192 kB' 'KernelStack: 12928 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15111788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.874 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.875 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:16:18.876 nr_hugepages=1024 00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:18.876 resv_hugepages=0 00:16:18.876 16:29:38 
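With anon, surp, and resv all established as 0, the harness echoes the counters and asserts the hugepage pool is consistent before re-reading HugePages_Total. A sketch of that accounting, reconstructed from the hugepages.sh steps in the trace (@97 through @110); it assumes the get_meminfo sketch earlier, and NRHUGE=1024 is an assumption mirroring the HugePages_Total this run reports:

```bash
NRHUGE=1024  # expected pool size; assumed from HugePages_Total in this run

anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
surp=$(get_meminfo HugePages_Surp)   # 0
resv=$(get_meminfo HugePages_Rsvd)   # 0
nr_hugepages=$NRHUGE

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The pool is consistent when the expected total covers allocated plus
# surplus plus reserved pages; the trace then queries HugePages_Total
# (the scan that follows) to confirm the kernel agrees.
(( NRHUGE == nr_hugepages + surp + resv )) || exit 1
```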
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:16:18.876 surplus_hugepages=0
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:16:18.876 anon_hugepages=0
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:18.876 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38998744 kB' 'MemAvailable: 42779216 kB' 'Buffers: 8316 kB' 'Cached: 16576232 kB' 'SwapCached: 0 kB' 'Active: 13804148 kB' 'Inactive: 3538764 kB' 'Active(anon): 13355384 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761600 kB' 'Mapped: 212872 kB' 'Shmem: 12597020 kB' 'KReclaimable: 437380 kB' 'Slab: 824572 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387192 kB' 'KernelStack: 12912 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15111812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
[... xtrace loop elided: setup/common.sh@31-32 walks every key of the snapshot above and hits `continue` until the key matches HugePages_Total ...]
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
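[Note: the get_nodes step traced just below enumerates the NUMA nodes from sysfs and records each node's live hugepage count. A hedged sketch of that enumeration, reusing the get_meminfo sketch from earlier; exactly how nodes_sys is populated is inferred from the trace, where both nodes read back 512.]

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        idx=${node##*node}                                   # ".../node0" -> 0
        nodes_sys[idx]=$(get_meminfo HugePages_Total "$idx")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) && echo "found $no_nodes nodes: ${nodes_sys[*]}"   # 512 512 here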
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:18.878 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25927896 kB' 'MemUsed: 6901988 kB' 'SwapCached: 0 kB' 'Active: 4751776 kB' 'Inactive: 138860 kB' 'Active(anon): 4411016 kB' 'Inactive(anon): 0 kB' 'Active(file): 340760 kB' 'Inactive(file): 138860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4334092 kB' 'Mapped: 117512 kB' 'AnonPages: 559680 kB' 'Shmem: 3854472 kB' 'KernelStack: 6648 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314116 kB' 'Slab: 542256 kB' 'SReclaimable: 314116 kB' 'SUnreclaim: 228140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace loop elided: setup/common.sh@31-32 walks the node0 snapshot above and hits `continue` until the key matches HugePages_Surp ...]
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 13070092 kB' 'MemUsed: 14641752 kB' 'SwapCached: 0 kB' 'Active: 9052484 kB' 'Inactive: 3399904 kB' 'Active(anon): 8944480 kB' 'Inactive(anon): 0 kB' 'Active(file): 108004 kB' 'Inactive(file): 3399904 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12250456 kB' 'Mapped: 95360 kB' 'AnonPages: 202032 kB' 'Shmem: 8742548 kB' 'KernelStack: 6248 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123264 kB' 'Slab: 282316 kB' 'SReclaimable: 123264 kB' 'SUnreclaim: 159052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
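[Note: the hugepages.sh@115-@117 loop being traced folds the global reserved count and each node's surplus count into that node's expected total. Condensed into a few lines, under the assumption that nodes_test starts from the requested 512/512 split shown in the earlier setup trace:]

    resv=$(get_meminfo HugePages_Rsvd)            # 0 in this run (hugepages.sh@100)
    nodes_test=([0]=512 [1]=512)                  # requested per-node split

    for node in "${!nodes_test[@]}"; do           # mirrors hugepages.sh@115-117
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done
    echo "expected per node: ${nodes_test[*]}"    # stays 512 512: resv=0, surp=0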
00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.880 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' / read -r var val _ loop skips the non-matching meminfo keys Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free before reaching its target]
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:16:18.881 node0=512 expecting 512
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:16:18.881 node1=512 expecting 512
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:16:18.881
00:16:18.881 real	0m1.533s
00:16:18.881 user	0m0.645s
00:16:18.881 sys	0m0.857s
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:16:18.881 16:29:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:16:18.881 ************************************
00:16:18.881 END TEST per_node_1G_alloc
00:16:18.881 ************************************
00:16:18.881 16:29:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:16:18.881 16:29:38 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
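For readers following the trace: the key-by-key scan condensed above is setup/common.sh's get_meminfo helper splitting each meminfo line on IFS=': ' and discarding every key that is not the one requested. A minimal sketch of that parser, reconstructed from the xtrace (the real helper also mapfiles the source and strips 'Node N ' prefixes from per-node files, so treat this as an approximation, not the verbatim script):

    # Approximate reconstruction of setup/common.sh's get_meminfo, per the trace.
    get_meminfo() {                 # usage: get_meminfo HugePages_Surp [node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # a node argument switches the source to that node's own meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other key
            echo "$val"                        # numeric value; the "kB" unit lands in $_
            return 0
        done <"$mem_f"
        return 1                               # key not present
    }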
00:16:18.881 16:29:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:16:18.881 16:29:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:16:18.881 ************************************
00:16:18.881 START TEST even_2G_alloc
00:16:18.881 ************************************
00:16:18.881 16:29:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:16:18.881 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:16:18.881 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:16:18.881 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:16:18.881 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:16:18.881 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:16:18.881 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:16:18.881 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:16:18.881 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:16:18.881 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:16:18.882 16:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
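In isolation, the arithmetic the block above traces: get_test_nr_hugepages turns the 2097152 kB request into 1024 default-size (2048 kB) hugepages, and get_test_nr_hugepages_per_node walks _no_nodes=2 down to zero, handing each node an even share of 512. A standalone sketch of that split (simplified: the real function also honors explicit per-node counts passed in via user_nodes):

    size=2097152              # requested kB; 2 GiB
    default_hugepages=2048    # kB per page, the Hugepagesize reported in the dumps below
    nr_hugepages=$((size / default_hugepages))   # 1024
    _no_nodes=2
    declare -a nodes_test
    per_node=$((nr_hugepages / _no_nodes))       # 512
    while ((_no_nodes > 0)); do
        nodes_test[--_no_nodes]=$per_node        # fill node1, then node0
    done
    printf 'node%s=%s\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"   # node0=512, node1=512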
00:16:20.262 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:16:20.262 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver
00:16:20.262 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:16:20.262 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:16:20.262 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:16:20.262 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:16:20.262 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:16:20.262 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:16:20.262 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:16:20.262 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:16:20.262 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:16:20.262 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:16:20.262 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:16:20.262 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:16:20.262 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:16:20.262 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:16:20.262 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38975680 kB' 'MemAvailable: 42756152 kB' 'Buffers: 8316 kB' 'Cached: 16576372 kB' 'SwapCached: 0 kB' 'Active: 13804936 kB' 'Inactive: 3538764 kB' 'Active(anon): 13356172 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 762224 kB' 'Mapped: 213024 kB' 'Shmem: 12597160 kB' 'KReclaimable: 437380 kB' 'Slab: 824728 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387348 kB' 'KernelStack: 12816 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15112016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198940 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
00:16:20.262 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' / read -r var val _ scan skips every key from MemTotal through HardwareCorrupted before matching]
00:16:20.263 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:16:20.263 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:16:20.263 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:16:20.263 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
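The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] guard a few lines up is verify_nr_hugepages testing whether transparent hugepages are disabled: the kernel brackets the active mode in /sys/kernel/mm/transparent_hugepage/enabled, and only when that bracket is not [never] does the script sample AnonHugePages (0 kB in the dump above, so anon=0 either way). A sketch of the gate under those assumptions, reusing the get_meminfo sketch from earlier:

    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP can still create anonymous huge pages, so account for them
        anon=$(get_meminfo AnonHugePages)   # kB; 0 in this run
    fi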
00:16:20.263 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:16:20.264 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:20.264 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:16:20.264 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:16:20.264 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:20.264 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:20.264 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:20.264 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:20.264 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:20.264 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:20.264 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38975136 kB' 'MemAvailable: 42755608 kB' 'Buffers: 8316 kB' 'Cached: 16576372 kB' 'SwapCached: 0 kB' 'Active: 13804392 kB' 'Inactive: 3538764 kB' 'Active(anon): 13355628 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761708 kB' 'Mapped: 212912 kB' 'Shmem: 12597160 kB' 'KReclaimable: 437380 kB' 'Slab: 824736 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387356 kB' 'KernelStack: 12896 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15112032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
00:16:20.264 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' / read -r var val _ scan skips every key from MemTotal through HugePages_Rsvd before matching]
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
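With anon and surp both established as 0, the remaining read is HugePages_Rsvd. The overall check this is building toward appears to be the following (an inference from the visible reads and the earlier 'node0=512 expecting 512' output, not the literal hugepages.sh source): system-wide, the pool net of surplus should equal the configured NRHUGE, and each node's pool should match its expected share.

    # Hypothetical shape of the verification, using values from the dumps above.
    expected=1024
    total=$(get_meminfo HugePages_Total)   # 1024
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    ((total - surp == expected)) || echo "pool mismatch: got $((total - surp)), want $expected"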
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:20.531 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:20.532 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38978424 kB' 'MemAvailable: 42758896 kB' 'Buffers: 8316 kB' 'Cached: 16576392 kB' 'SwapCached: 0 kB' 'Active: 13804432 kB' 'Inactive: 3538764 kB' 'Active(anon): 13355668 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761756 kB' 'Mapped: 212912 kB' 'Shmem: 12597180 kB' 'KReclaimable: 437380 kB' 'Slab: 824736 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387356 kB' 'KernelStack: 12896 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15112056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
00:16:20.532 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the key scan has skipped MemTotal through SecPageTables when this capture of the log ends at 00:16:20.533, still mid-lookup]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:16:20.533 nr_hugepages=1024 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:20.533 resv_hugepages=0 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:20.533 surplus_hugepages=0 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:20.533 anon_hugepages=0 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.533 
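The loop traced above is get_meminfo in setup/common.sh scanning a meminfo-style file key by key until the requested one matches, then echoing its value. A minimal standalone sketch of the same pattern, assuming only bash 4+ with extglob; meminfo_lookup is an illustrative name, not SPDK's:

#!/usr/bin/env bash
shopt -s extglob # needed for the +([0-9]) pattern below, as in the trace

# Illustrative re-creation of the lookup seen in the trace: read a
# meminfo-style file, strip the per-node "Node <N> " prefix if present,
# and print the value for one key.
meminfo_lookup() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	# Prefer the per-NUMA-node file when a node index is given.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local -a mem
	mapfile -t mem <"$mem_f"
	# Per-node lines carry a "Node <N> " prefix; strip it so both file
	# formats parse identically (extglob pattern, mirroring common.sh@29).
	mem=("${mem[@]#Node +([0-9]) }")
	local line var val _
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line" # "HugePages_Total:  1024" -> var/val
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1
}

meminfo_lookup HugePages_Total  # prints 1024 on the machine in this log
meminfo_lookup HugePages_Surp 0 # node-0 surplus; prints 0 here

The real script inlines this loop under xtrace, which is why every skipped key shows up as a compare-plus-continue pair in the log.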
00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:20.533 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38980032 kB' 'MemAvailable: 42760504 kB' 'Buffers: 8316 kB' 'Cached: 16576396 kB' 'SwapCached: 0 kB' 'Active: 13802772 kB' 'Inactive: 3538764 kB' 'Active(anon): 13354008 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 760152 kB' 'Mapped: 212040 kB' 'Shmem: 12597184 kB' 'KReclaimable: 437380 kB' 'Slab: 824736 kB' 'SReclaimable: 437380 kB' 'SUnreclaim: 387356 kB' 'KernelStack: 12912 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15101004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198940 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
[repetitive xtrace elided: the same setup/common.sh@32 scan repeats over the dump above, this time comparing each key against HugePages_Total and skipping with `continue`]
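The hugepages.sh checks surrounding these lookups reduce to plain pool accounting: the global pool must satisfy HugePages_Total == nr_hugepages + surplus + reserved, which the values dumped above do (1024 == 1024 + 0 + 0). A compact sketch of that arithmetic using the numbers from this log (variable names nr_hugepages/resv/surp mirror the trace; the rest are illustrative):

#!/usr/bin/env bash
# Pool accounting with the values from the dumps above.
nr_hugepages=1024 resv=0 surp=0
(( 1024 == nr_hugepages + surp + resv )) || echo "global pool mismatch" >&2

hugepagesize_kb=2048 # Hugepagesize: 2048 kB
total_bytes=$(( nr_hugepages * hugepagesize_kb * 1024 ))
echo "pool size: $total_bytes bytes" # 2147483648 = 2 GiB, hence "even_2G_alloc"

The 2 GiB figure agrees with the 'Hugetlb: 2097152 kB' line in the dumps.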
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:20.535 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25927360 kB' 'MemUsed: 6902524 kB' 'SwapCached: 0 kB' 'Active: 4751912 kB' 'Inactive: 138860 kB' 'Active(anon): 4411152 kB' 'Inactive(anon): 0 kB' 'Active(file): 340760 kB' 'Inactive(file): 138860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4334256 kB' 'Mapped: 117020 kB' 'AnonPages: 559780 kB' 'Shmem: 3854636 kB' 'KernelStack: 6728 kB' 'PageTables: 5192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314116 kB' 'Slab: 542468 kB' 'SReclaimable: 314116 kB' 'SUnreclaim: 228352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[repetitive xtrace elided: setup/common.sh@32 scans the node0 dump above, comparing each key against HugePages_Surp and skipping with `continue`]
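get_nodes, traced just above, discovers the NUMA layout by globbing /sys/devices/system/node/node<N> and expects the 1024-page pool to be split evenly, 512 pages per node. A standalone sketch of that enumeration plus a per-node readback; the awk field positions assume the kernel's "Node <N> <Key>: <value>" per-node meminfo format, and the variable names are illustrative:

#!/usr/bin/env bash
shopt -s extglob nullglob # extglob for +([0-9]), nullglob for non-NUMA boxes

# Enumerate NUMA nodes the way the trace does and record the expected
# even share of the 1024 hugepages (512 per node on this 2-node machine).
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
	nodes_sys[${node##*node}]=512
done
echo "no_nodes=${#nodes_sys[@]}" # 2 in this log

# Read each node's actual pool size back; per-node meminfo lines look
# like "Node 0 HugePages_Total:   512", so the key is field 3.
for n in "${!nodes_sys[@]}"; do
	total=$(awk '$3 == "HugePages_Total:" {print $4}' \
		"/sys/devices/system/node/node$n/meminfo")
	echo "node$n: HugePages_Total=$total expected=${nodes_sys[$n]}"
done

The per-node HugePages_Surp lookups that follow confirm none of those 512 pages are surplus pages that the kernel could reclaim.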
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 13059360 kB' 'MemUsed: 14652484 kB' 'SwapCached: 0 kB' 'Active: 9049400 kB' 'Inactive: 3399904 kB' 'Active(anon): 8941396 kB' 'Inactive(anon): 0 kB' 'Active(file): 108004 kB' 'Inactive(file): 3399904 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12250472 kB' 'Mapped: 94800 kB' 'AnonPages: 198856 kB' 'Shmem: 8742564 kB' 'KernelStack: 6232 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123264 kB' 'Slab: 282256 kB' 'SReclaimable: 123264 kB' 'SUnreclaim: 158992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[repetitive xtrace elided: setup/common.sh@32 scans the node1 dump above against HugePages_Surp, reaching the entry below]
00:16:20.537 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:16:20.538 node0=512 expecting 512 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:16:20.538 node1=512 expecting 512 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:16:20.538 00:16:20.538 real 0m1.553s 00:16:20.538 user 0m0.643s 00:16:20.538 sys 0m0.876s 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:20.538 16:29:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:16:20.538 ************************************ 00:16:20.538 END TEST even_2G_alloc 00:16:20.538 ************************************ 00:16:20.538 16:29:40 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:16:20.538 16:29:40 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:20.538 16:29:40 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:20.538 16:29:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:16:20.538 ************************************ 00:16:20.538 START TEST odd_alloc 00:16:20.538 ************************************ 00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- 
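[note: the odd_alloc trace below asks for 2098176 kB (1025 pages of 2048 kB) and splits them across the two NUMA nodes as 513 + 512. A minimal bash sketch of that split, reusing the variable names from the xtrace (nr_hugepages, _no_nodes, nodes_test); a reconstruction for readability, not the verbatim SPDK script:

    #!/usr/bin/env bash
    # Walk the nodes from highest to lowest, giving each its integer share
    # of what is still unassigned: 1025 over 2 nodes -> node1=512, node0=513.
    nr_hugepages=1025      # 2098176 kB requested / 2048 kB per page
    _no_nodes=2
    _nr=$nr_hugepages
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr / _no_nodes ))
        _nr=$(( _nr - nodes_test[_no_nodes - 1] ))
        (( _no_nodes-- ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512
]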
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:16:20.538 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:16:20.539 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:16:20.539 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:16:20.539 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:16:20.539 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:16:20.539 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:16:20.539 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:16:20.539 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:16:20.539 16:29:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:16:20.539 16:29:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:16:20.539 16:29:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
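[note: scripts/setup.sh is driven here by HUGEMEM=2049 (MB) and HUGE_EVEN_ALLOC=yes, i.e. the 1025 pages planned above spread across nodes. A rough sketch of the arithmetic and of the standard kernel sysfs knob involved; illustrative only, not the verbatim scripts/setup.sh:

    #!/usr/bin/env bash
    # HUGEMEM megabytes -> count of 2048 kB hugepages, rounding up: 2049 -> 1025.
    HUGEMEM=${HUGEMEM:-2049}
    pages=$(( (HUGEMEM + 1) / 2 ))
    echo "requesting $pages hugepages total"
    # Per-node allocations go through the kernel's per-node sysfs files:
    for node in /sys/devices/system/node/node[0-9]*; do
        echo "$node/hugepages/hugepages-2048kB/nr_hugepages"
    done
]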
00:16:21.921 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:16:21.921 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver
00:16:21.921 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:16:21.921 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:16:21.921 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:16:21.921 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:16:21.921 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:16:21.921 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:16:21.922 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:16:21.922 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:16:21.922 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:16:21.922 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:16:21.922 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:16:21.922 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:16:21.922 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:16:21.922 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:16:21.922 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:21.922 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38986456 kB' 'MemAvailable: 42766920 kB' 'Buffers: 8316 kB' 'Cached: 16576500 kB' 'SwapCached: 0 kB' 'Active: 13801624 kB' 'Inactive: 3538764 kB' 'Active(anon): 13352860 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 758856 kB' 'Mapped: 211904 kB' 'Shmem: 12597288 kB' 'KReclaimable: 437372 kB' 'Slab: 824528 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 387156 kB' 'KernelStack: 12800 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15098836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198908 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
[setup/common.sh@31-32 xtrace loops IFS=': ' / read -r var val _ / continue over each field from MemTotal through HardwareCorrupted until AnonHugePages matches]
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
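[note: the HugePages_Surp lookup that follows (and the HugePages_Rsvd one after it) goes through the same get_meminfo helper the whole trace exercises. A reconstruction of that helper from the setup/common.sh@17-33 line numbers above; close to, but not guaranteed to match, the exact source:

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2 var val
        local mem_f=/proc/meminfo mem line
        # with a node argument, prefer that node's meminfo file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node N "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Surp      # system-wide surplus (0 in this run)
    get_meminfo HugePages_Total 1   # node1 total (512 earlier in this run)
]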
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:21.923 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38986604 kB' 'MemAvailable: 42767068 kB' 'Buffers: 8316 kB' 'Cached: 16576504 kB' 'SwapCached: 0 kB' 'Active: 13801316 kB' 'Inactive: 3538764 kB' 'Active(anon): 13352552 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 758504 kB' 'Mapped: 211852 kB' 'Shmem: 12597292 kB' 'KReclaimable: 437372 kB' 'Slab: 824528 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 387156 kB' 'KernelStack: 12832 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15098852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
[setup/common.sh@31-32 xtrace loops IFS=': ' / read -r var val _ / continue over each field from MemTotal through HugePages_Free before reaching the HugePages_Rsvd and HugePages_Surp entries]
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:21.925 16:29:41
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38986352 kB' 'MemAvailable: 42766816 kB' 'Buffers: 8316 kB' 'Cached: 16576524 kB' 'SwapCached: 0 kB' 'Active: 13801560 kB' 'Inactive: 3538764 kB' 'Active(anon): 13352796 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 758728 kB' 'Mapped: 211852 kB' 'Shmem: 12597312 kB' 'KReclaimable: 437372 kB' 'Slab: 824496 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 387124 kB' 'KernelStack: 12848 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15098872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198876 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB' 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- 
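The condensed block above is the generic get_meminfo pattern: split each meminfo line on ': ', skip every key that is not the one requested, and print the value of the one that matches. A minimal stand-alone sketch of the same loop, assuming plain /proc/meminfo input (get_meminfo_value is an illustrative name, not the test's real helper, which additionally snapshots the file with mapfile and strips any leading 'Node N ' prefix):

  get_meminfo_value() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          # skip keys until the requested one, as each "continue" above does
          [[ $var == "$key" ]] || continue
          echo "$val"
          return 0
      done </proc/meminfo
      return 1
  }

  get_meminfo_value HugePages_Surp   # prints 0 on this runner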
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38986352 kB' 'MemAvailable: 42766816 kB' 'Buffers: 8316 kB' 'Cached: 16576524 kB' 'SwapCached: 0 kB' 'Active: 13801560 kB' 'Inactive: 3538764 kB' 'Active(anon): 13352796 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 758728 kB' 'Mapped: 211852 kB' 'Shmem: 12597312 kB' 'KReclaimable: 437372 kB' 'Slab: 824496 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 387124 kB' 'KernelStack: 12848 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15098872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198876 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
00:16:21.925 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (trace condensed: keys MemTotal through HugePages_Free each fail [[ $var == HugePages_Rsvd ]] and continue)
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:16:21.927 nr_hugepages=1025
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:16:21.927 resv_hugepages=0
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:16:21.927 surplus_hugepages=0
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:16:21.927 anon_hugepages=0
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
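With surp=0 and resv=0 in hand, hugepages.sh@107-109 asserts that the odd allocation is fully accounted for: the 1025 pages the test configured must equal nr_hugepages plus surplus plus reserved, and must also match nr_hugepages exactly. A minimal sketch of that guard, assuming the values echoed above (requested is an illustrative name):

  nr_hugepages=1025 surp=0 resv=0
  requested=1025   # the deliberately odd page count this test asks for
  if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
      echo "odd_alloc accounting consistent"
  else
      echo "hugepage accounting mismatch" >&2
  fi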
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38987576 kB' 'MemAvailable: 42768040 kB' 'Buffers: 8316 kB' 'Cached: 16576544 kB' 'SwapCached: 0 kB' 'Active: 13803824 kB' 'Inactive: 3538764 kB' 'Active(anon): 13355060 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761004 kB' 'Mapped: 212288 kB' 'Shmem: 12597332 kB' 'KReclaimable: 437372 kB' 'Slab: 824488 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 387116 kB' 'KernelStack: 12816 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 15102360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
00:16:21.927 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (trace condensed: keys MemTotal through Unaccepted each fail [[ $var == HugePages_Total ]] and continue)
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
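get_nodes has just sized the split for this test (512 pages expected on node 0, 513 on node 1, no_nodes=2); the per-node loop that follows re-reads the counters through the same get_meminfo, and common.sh@22-24 swaps in the node-scoped meminfo file when one exists. A sketch of that path selection under the same sysfs layout (meminfo_path is an illustrative name):

  meminfo_path() {
      local node=$1 mem_f=/proc/meminfo
      # an empty $node probes .../node/node/meminfo, which does not exist,
      # so the system-wide file is kept, exactly as the earlier calls show
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      echo "$mem_f"
  }

  meminfo_path 0   # -> /sys/devices/system/node/node0/meminfo
  meminfo_path     # -> /proc/meminfo

Per-node files prefix each line with "Node N ", which is why the trace strips it with mem=("${mem[@]#Node +([0-9]) }") before scanning.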
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:21.929 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25917024 kB' 'MemUsed: 6912860 kB' 'SwapCached: 0 kB' 'Active: 4752580 kB' 'Inactive: 138860 kB' 'Active(anon): 4411820 kB' 'Inactive(anon): 0 kB' 'Active(file): 340760 kB' 'Inactive(file): 138860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4334332 kB' 'Mapped: 117036 kB' 'AnonPages: 560264 kB' 'Shmem: 3854712 kB' 'KernelStack: 6568 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314116 kB' 'Slab: 542280 kB' 'SReclaimable: 314116 kB' 'SUnreclaim: 228164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (trace condensed: node-0 keys MemTotal through HugePages_Free each fail [[ $var == HugePages_Surp ]] and continue)
00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:21.930 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:21.931 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 13069040 kB' 'MemUsed: 14642804 kB' 'SwapCached: 0 kB' 'Active: 9049600 kB' 'Inactive: 3399904 kB' 'Active(anon): 8941596 kB' 'Inactive(anon): 0 kB' 'Active(file): 108004 kB' 'Inactive(file): 3399904 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12250532 kB' 'Mapped: 94816 kB' 'AnonPages: 199060 kB' 'Shmem: 8742624 kB' 'KernelStack: 6280 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123256 kB' 'Slab: 282208 kB' 'SReclaimable: 123256 kB' 'SUnreclaim: 158952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:16:21.931 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:21.931 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:16:21.931 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:21.931 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:21.931 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:21.931 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:16:21.931 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:21.931 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
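The trace above is setup/common.sh's get_meminfo walking a meminfo file field by field; the backslash-escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p is just how bash's xtrace renders the quoted right-hand side of a [[ == ]] test. A minimal sketch of the same pattern follows (a simplified reconstruction from the trace, not the exact SPDK helper):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    # Scan a meminfo file for one field, as the trace shows: pick
    # /proc/meminfo or the per-node file, strip the "Node N " prefix
    # that per-node files carry, then read "key: value" pairs until
    # the requested key matches.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Surp 1   # e.g. prints 0, as in the trace above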
00:16:21.931 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:21.931 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the same @32-compare / @32-continue / @31-read trace repeats for each remaining field of the node1 snapshot above until the requested field is reached ...]
00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:16:21.932 node0=512 expecting 513 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:16:21.932 node1=513 expecting 512 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:16:21.932 00:16:21.932 real 0m1.503s 00:16:21.932 user 0m0.625s 00:16:21.932 sys 0m0.841s 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:21.932 16:29:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:16:21.932 ************************************ 00:16:21.932 END TEST odd_alloc 00:16:21.932 ************************************ 00:16:22.191 16:29:41 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:16:22.191 16:29:41 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:22.191 16:29:41 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:22.191 16:29:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:16:22.191 ************************************ 00:16:22.191 START TEST custom_alloc 00:16:22.191 ************************************ 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:22.191 16:29:41 
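odd_alloc deliberately requests an odd total (here 1025 pages across two nodes), so one node ends up with the extra page; since the kernel is free to place that extra page on either node, the test compares the multiset of per-node counts rather than matching node by node, which is why "node0=512 expecting 513" and "node1=513 expecting 512" still pass via [[ 512 513 == \5\1\2\ \5\1\3 ]]. A reduced illustration of the sorted_t/sorted_s trick from the trace (indexed-array keys come back in ascending order, giving an implicit sort):

    # expected vs. actually allocated pages per node (values swapped, as above)
    nodes_test=([0]=512 [1]=513)
    nodes_sys=([0]=513 [1]=512)
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # count used as an array index...
        sorted_s[nodes_sys[node]]=1
    done
    # ...so ${!arr[*]} lists the counts in sorted order: "512 513" both times
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node counts match"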
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:16:22.191 16:29:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:16:23.573 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:16:23.573 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:16:23.573 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:16:23.573 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:16:23.573 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:16:23.573 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:16:23.573 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:16:23.573 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:16:23.573 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:16:23.573 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:16:23.573 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:16:23.573 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:16:23.573 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:16:23.573 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:16:23.573 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:16:23.573 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 
00:16:23.573 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37925636 kB' 'MemAvailable: 41706100 kB' 'Buffers: 8316 kB' 'Cached: 16576632 kB' 'SwapCached: 0 kB' 'Active: 13802904 kB' 'Inactive: 3538764 kB' 'Active(anon): 13354140 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759980 kB' 'Mapped: 211948 kB' 'Shmem: 12597420 kB' 'KReclaimable: 437372 kB' 'Slab: 824360 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 386988 kB' 'KernelStack: 12832 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15098732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198940 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 
'DirectMap1G: 40894464 kB' 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:23.573 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same @32-compare / @32-continue / @31-read trace repeats for each field of the global snapshot above until the requested field is reached ...]
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
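At this point custom_alloc has confirmed AnonHugePages is 0 (anon=0) and is re-running get_meminfo for HugePages_Surp against the global /proc/meminfo (node= is empty, so [[ -n '' ]] fails and no per-node file is selected). The surrounding hugepages.sh logic then folds reserved and surplus pages into the expected per-node totals; roughly, reconstructed from the hugepages.sh@ line numbers the trace shows (the real script's variable plumbing may differ):

    # get_meminfo as sketched earlier; anon, resv and surp are all 0 in this run
    nodes_test=([0]=512 [1]=1024)        # custom_alloc's expected per-node counts
    anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97: anon=0
    resv=$(get_meminfo HugePages_Rsvd)   # 0 here, per the snapshot above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                  # @116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") )) # @117
    done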
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:23.574 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37924640 kB' 'MemAvailable: 41705104 kB' 'Buffers: 8316 kB' 'Cached: 16576632 kB' 'SwapCached: 0 kB' 'Active: 13801972 kB' 'Inactive: 3538764 kB' 'Active(anon): 13353208 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759000 kB' 'Mapped: 211892 kB' 'Shmem: 12597420 kB' 'KReclaimable: 437372 kB' 'Slab: 824328 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 386956 kB' 'KernelStack: 12784 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15098752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198908 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@32 field scan: every field from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and continues; identical IFS=': ' / read -r var val _ iterations elided ...]
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:23.576 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37924388 kB' 'MemAvailable: 41704852 kB' 'Buffers: 8316 kB' 'Cached: 16576660 kB' 'SwapCached: 0 kB' 'Active: 13802204 kB' 'Inactive: 3538764 kB' 'Active(anon): 13353440 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759228 kB' 'Mapped: 211816 kB' 'Shmem: 12597448 kB' 'KReclaimable: 437372 kB' 'Slab: 824328 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 386956 kB' 'KernelStack: 12848 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15099140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@32 field scan: every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match and continues; identical IFS=': ' / read -r var val _ iterations elided ...]
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:16:23.578 nr_hugepages=1536
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:16:23.578 resv_hugepages=0
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:16:23.578 surplus_hugepages=0
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:16:23.578 anon_hugepages=0
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
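At hugepages.sh@107 and @109 the test asserts the accounting identity behind the four echoes above: the kernel's HugePages_Total must be fully explained by the requested count plus surplus and reserved pages (1536 == 1536 + 0 + 0 in this run). A standalone sketch of that check, reusing the hypothetical get_meminfo sketch above (values in the comments are the ones from this run):

nr_hugepages=1536                     # requested count (assumed set via /proc/sys/vm/nr_hugepages)
anon=$(get_meminfo AnonHugePages)     # 0 kB here: no transparent hugepages in use
surp=$(get_meminfo HugePages_Surp)    # 0 here
resv=$(get_meminfo HugePages_Rsvd)    # 0 here
total=$(get_meminfo HugePages_Total)  # 1536 here

# The invariant asserted at hugepages.sh@107:
if (( total == nr_hugepages + surp + resv )); then
	echo "hugepage accounting consistent: $total pages"
else
	echo "hugepage accounting mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
	exit 1
fi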
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:23.578 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:23.579 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37924388 kB' 'MemAvailable: 41704852 kB' 'Buffers: 8316 kB' 'Cached: 16576680 kB' 'SwapCached: 0 kB' 'Active: 13802216 kB' 'Inactive: 3538764 kB' 'Active(anon): 13353452 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759244 kB' 'Mapped: 211816 kB' 'Shmem: 12597468 kB' 'KReclaimable: 437372 kB' 'Slab: 824328 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 386956 kB' 'KernelStack: 12848 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 15099164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@32 field scan: every field from MemTotal through Unaccepted fails the HugePages_Total match and continues; identical IFS=': ' / read -r var val _ iterations elided ...]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:23.580 16:29:43 
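The assertion at hugepages.sh@110 is the heart of the custom_alloc check: the kernel-reported HugePages_Total (1536 in this run) must equal the requested page count plus any surplus and reserved pages. A minimal sketch of that invariant, using standard /proc/meminfo keys; the expected count of 1536 (512 on node0 plus 1024 on node1) is taken from this run, not a general constant:

```bash
#!/usr/bin/env bash
# Sketch of the hugepages.sh@110 invariant; keys are standard /proc/meminfo
# fields, the expected count (1536) comes from this particular run.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
nr_hugepages=1536   # 512 requested on node0 + 1024 on node1 in this run
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
fi
```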
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:23.580 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25909540 kB' 'MemUsed: 6920344 kB' 'SwapCached: 0 kB' 'Active: 4753168 kB' 'Inactive: 138860 kB' 'Active(anon): 4412408 kB' 'Inactive(anon): 0 kB' 'Active(file): 340760 kB' 'Inactive(file): 138860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4334408 kB' 'Mapped: 117044 kB' 'AnonPages: 560764 kB' 'Shmem: 3854788 kB' 'KernelStack: 6600 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314116 kB' 'Slab: 542184 kB' 'SReclaimable: 314116 kB' 'SUnreclaim: 228068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:16:23.580 16:29:43 [xtrace elided: each node0 meminfo key (MemTotal through HugePages_Free) is compared against HugePages_Surp and skipped with "continue"]
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
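The long runs of "continue" lines in this trace are all produced by the same helper: setup/common.sh's get_meminfo reads the chosen meminfo file one "key: value" pair at a time and skips every key until it reaches the requested one. A condensed re-creation of that loop, reconstructed from the xtrace rather than the actual source, so treat the details as approximate:

```bash
# Reconstruction of the get_meminfo loop visible in this xtrace; the real
# helper is the setup/common.sh traced above, so this is only a sketch.
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node N "; strip it the way
    # the mem=("${mem[@]#Node +([0-9]) }") expansion does.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every miss logs one "continue"
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

get_meminfo HugePages_Surp 0   # -> 0 in this run
```

Because xtrace logs every `[[ ... ]]` comparison and every `continue`, a single lookup over the ~40 meminfo keys expands into the blocks of near-identical lines seen here.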
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:23.582 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 12014092 kB' 'MemUsed: 15697752 kB' 'SwapCached: 0 kB' 'Active: 9048776 kB' 'Inactive: 3399904 kB' 'Active(anon): 8940772 kB' 'Inactive(anon): 0 kB' 'Active(file): 108004 kB' 'Inactive(file): 3399904 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12250592 kB' 'Mapped: 94772 kB' 'AnonPages: 198208 kB' 'Shmem: 8742684 kB' 'KernelStack: 6248 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123256 kB' 'Slab: 282144 kB' 'SReclaimable: 123256 kB' 'SUnreclaim: 158888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:16:23.582 16:29:43 [xtrace elided: each node1 meminfo key (MemTotal through HugePages_Free) is compared against HugePages_Surp and skipped with "continue"]
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
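get_nodes fills nodes_sys[] with the per-node totals (512 and 1024 above) so they can be compared against what the test requested. The same numbers can be read straight from sysfs; a sketch assuming the default 2048 kB hugepage size, with the expected values taken from this run:

```bash
# Sketch: gather per-NUMA-node hugepage totals from sysfs (assumes the
# default 2048 kB page size; adjust the subdirectory for other sizes).
declare -a nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    nodes_sys[n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
printf 'node%s=%s\n' 0 "${nodes_sys[0]}" 1 "${nodes_sys[1]}"
# In this run: node0=512, node1=1024 -> matches "expecting 512"/"expecting 1024"
```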
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:16:23.843
00:16:23.843 real	0m1.625s
00:16:23.843 user	0m0.681s
00:16:23.843 sys	0m0.912s
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:16:23.843 16:29:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:16:23.843 ************************************
00:16:23.843 END TEST custom_alloc
00:16:23.843 ************************************
00:16:23.843 16:29:43 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:16:23.843 16:29:43 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:16:23.843 16:29:43 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:16:23.843 16:29:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:16:23.843 ************************************
00:16:23.843 START TEST no_shrink_alloc
00:16:23.843 ************************************
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
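get_test_nr_hugepages turns a requested size into a page count before the no_shrink_alloc test runs. The numbers in this trace suggest the size argument is in kB: 2097152 kB (2 GiB) divided by the 2048 kB default hugepage size gives the nr_hugepages=1024 seen above, pinned to node 0 via node_ids. A sketch of that arithmetic, with the units treated as an inference from the trace rather than a documented contract:

```bash
# Sketch of the sizing step (units inferred from this trace: 2097152 kB
# requested / 2048 kB per page -> nr_hugepages=1024 on node 0).
size_kb=2097152
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages on node 0"   # 2097152 / 2048 = 1024
```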
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:16:23.843 16:29:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:16:25.221 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:16:25.221 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver
00:16:25.221 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:16:25.221 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:16:25.221 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:16:25.221 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:16:25.221 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:16:25.221 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:16:25.221 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:16:25.221 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:16:25.221 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:16:25.221 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:16:25.221 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:16:25.221 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:16:25.221 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:16:25.221 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:16:25.221 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:25.221 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38962780 kB' 'MemAvailable: 42743244 kB' 'Buffers: 8316 kB' 'Cached: 16576764 kB' 'SwapCached: 0 kB' 'Active: 13802428 kB' 'Inactive: 3538764 kB' 'Active(anon): 13353664 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759312 kB' 'Mapped: 211868 kB' 'Shmem: 12597552 kB' 'KReclaimable: 437372 kB' 'Slab: 824364 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 386992 kB' 'KernelStack: 12832 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15099488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
00:16:25.221 16:29:44 [xtrace elided: each system-wide meminfo key (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, the Active/Inactive anon and file variants, Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu) is compared against AnonHugePages and skipped with "continue"]
00:16:25.223 16:29:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38964416 kB' 'MemAvailable: 42744880 kB' 'Buffers: 8316 kB' 'Cached: 16576768 kB' 'SwapCached: 0 kB' 'Active: 13802692 kB' 'Inactive: 3538764 kB' 'Active(anon): 13353928 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759600 kB' 'Mapped: 211832 kB' 'Shmem: 12597556 kB' 'KReclaimable: 437372 kB' 'Slab: 824356 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 386984 kB' 'KernelStack: 12848 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15099756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
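The common.sh@16..@33 entries traced above show how get_meminfo works: snapshot the chosen meminfo file into an array, split each line on ': ', and echo the value of the first key matching the requested field (the long runs of "continue" are every non-matching key). A minimal sketch of that loop, reconstructed from the trace; the real setup/common.sh may structure it differently:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern seen at common.sh@29

    # Echo the value of one meminfo field; a per-node file is used when a node
    # is given (its lines carry a "Node N " prefix, stripped below).
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val rest mem_f mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val rest <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated "continue" entries above
            echo "$val"    # kB figure, or a bare count for the HugePages_* fields
            return 0
        done
        return 1
    }

With this shape, the values seen in the log follow directly: anon=$(get_meminfo AnonHugePages) yields 0 on this host, as hugepages.sh@97 records.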
00:16:25.223 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31..32 -- # [scan loop elided: keys MemTotal .. HugePages_Rsvd each compared against HugePages_Surp, no match, continue]
00:16:25.488 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:25.488 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:16:25.488 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:16:25.488 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:16:25.488 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:16:25.488 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17..29 -- # [get_meminfo setup elided: get=HugePages_Rsvd; node=; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")]
00:16:25.488 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38971800 kB' 'MemAvailable: 42752264 kB' 'Buffers: 8316 kB' 'Cached: 16576768 kB' 'SwapCached: 0 kB' 'Active: 13802416 kB' 'Inactive: 3538764 kB' 'Active(anon): 13353652 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759420 kB' 'Mapped: 211828 kB' 'Shmem: 12597556 kB' 'KReclaimable: 437372 kB' 'Slab: 824324 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 386952 kB' 'KernelStack: 12848 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15099528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
00:16:25.489 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31..32 -- # [scan loop elided: keys MemTotal .. HugePages_Free each compared against HugePages_Rsvd, no match, continue]
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
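The meminfo snapshots above are internally consistent: with Hugepagesize at 2048 kB and HugePages_Total at 1024 pages, the Hugetlb figure must be 2 GiB. A quick arithmetic check, with the values copied from the snapshot:

    hp_total=1024       # HugePages_Total (pages), from the snapshot
    hp_size_kb=2048     # Hugepagesize in kB
    echo "$(( hp_total * hp_size_kb )) kB"   # prints: 2097152 kB, matching the Hugetlb line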
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:16:25.490 nr_hugepages=1024
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:16:25.490 resv_hugepages=0
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:16:25.490 surplus_hugepages=0
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:16:25.490 anon_hugepages=0
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:16:25.490 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17..29 -- # [get_meminfo setup elided: get=HugePages_Total; node=; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")]
00:16:25.491 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38974944 kB' 'MemAvailable: 42755408 kB' 'Buffers: 8316 kB' 'Cached: 16576772 kB' 'SwapCached: 0 kB' 'Active: 13802564 kB' 'Inactive: 3538764 kB' 'Active(anon): 13353800 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759600 kB' 'Mapped: 211828 kB' 'Shmem: 12597560 kB' 'KReclaimable: 437372 kB' 'Slab: 824372 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 387000 kB' 'KernelStack: 12832 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15099552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
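The hugepages.sh@97..@110 entries above carry the point of the no_shrink_alloc test: after the workload, the anonymous, surplus and reserved hugepage counters are read back and the pool size is asserted unchanged. A hedged sketch of that accounting; it reuses the get_meminfo sketch earlier, the wrapper name no_shrink_check is illustrative rather than the script's own, and the sketch orders the final lookup before the assertions while the trace interleaves them slightly differently:

    # Verify the hugepage pool still accounts for every requested page.
    no_shrink_check() {
        local nr_hugepages=$1            # 1024 in this run
        local anon surp resv total
        anon=$(get_meminfo AnonHugePages)     # @97  -> 0 here
        surp=$(get_meminfo HugePages_Surp)    # @99  -> 0
        resv=$(get_meminfo HugePages_Rsvd)    # @100 -> 0
        echo "nr_hugepages=$nr_hugepages"     # @102
        echo "resv_hugepages=$resv"           # @103
        echo "surplus_hugepages=$surp"        # @104
        echo "anon_hugepages=$anon"           # @105
        total=$(get_meminfo HugePages_Total)  # @110
        # pool must not have shrunk: total == requested + surplus + reserved,
        # and with surp=resv=0 it must equal the requested count exactly
        (( total == nr_hugepages + surp + resv )) || return 1   # @107
        (( total == nr_hugepages ))                             # @109
    }

With surp, resv and anon all 0 and HugePages_Total still 1024, both arithmetic tests pass, which is why the trace proceeds without error.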
00:16:25.491 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:16:25.491 [... identical read/continue iterations elided for every non-matching /proc/meminfo key from MemTotal through Unaccepted ...]
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
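get_nodes enumerates the NUMA nodes through sysfs and records each node's hugepage count, keyed by the numeric suffix of the node directory. A sketch of the same pattern (the extglob `node+([0-9])` glob is from the trace; the hugepage sysfs path is our assumption for the default 2048kB page size):

    # Sketch of the get_nodes pattern traced above.
    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything up to the last "node", leaving the index
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"    # 2 on this host: node0=1024, node1=0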
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:16:25.492 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:25.493 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:16:25.493 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:16:25.493 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:25.493 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:25.493 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:16:25.493 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:16:25.493 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:25.493 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:25.493 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 24859152 kB' 'MemUsed: 7970732 kB' 'SwapCached: 0 kB' 'Active: 4756344 kB' 'Inactive: 138860 kB' 'Active(anon): 4415584 kB' 'Inactive(anon): 0 kB' 'Active(file): 340760 kB' 'Inactive(file): 138860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4334488 kB' 'Mapped: 117056 kB' 'AnonPages: 564036 kB' 'Shmem: 3854868 kB' 'KernelStack: 6584 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314116 kB' 'Slab: 542260 kB' 'SReclaimable: 314116 kB' 'SUnreclaim: 228144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
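When a node index is given, get_meminfo reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo; those per-node lines carry a "Node 0 " prefix, which the `mem=("${mem[@]#Node +([0-9]) }")` step in the trace strips from every array element before parsing. A sketch of just that step:

    # Sketch: strip the "Node <n> " prefix from per-node meminfo lines
    # (extglob pattern, exactly as traced at setup/common.sh@29).
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[0]}"    # e.g. "MemTotal: 32829884 kB"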
00:16:25.493 [... identical read/continue iterations elided while the loop scans node0 meminfo keys from MemTotal through HugePages_Free, none matching HugePages_Surp ...]
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:16:25.494 node0=1024 expecting 1024
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:16:25.494 16:29:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:16:26.872 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:16:26.872 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver
00:16:26.872 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:16:26.872 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:16:26.872 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:16:26.872 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:16:26.872 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:16:26.872 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:16:26.872 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:16:26.872 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:16:26.872 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:16:26.872 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:16:26.872 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:16:26.872 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:16:26.872 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:16:26.872 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:16:26.872 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:16:26.872 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
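The step above re-runs SPDK's setup.sh with the NRHUGE and CLEAR_HUGE environment variables set at hugepages.sh@202; the INFO line suggests that with CLEAR_HUGE=no the existing 1024-page allocation on node0 is left in place rather than shrunk to the requested 512 (hence the test name no_shrink_alloc). Roughly how one would reproduce that invocation locally (path is this CI workspace's; setup.sh normally needs root):

    # Hedged sketch of the traced invocation, adjusted for a local checkout:
    sudo CLEAR_HUGE=no NRHUGE=512 ./spdk/scripts/setup.sh
    # expected on a host that already has 1024 pages on node0:
    # "INFO: Requested 512 hugepages but 1024 already allocated on node0"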
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:26.872 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38966120 kB' 'MemAvailable: 42746584 kB' 'Buffers: 8316 kB' 'Cached: 16576880 kB' 'SwapCached: 0 kB' 'Active: 13802352 kB' 'Inactive: 3538764 kB' 'Active(anon): 13353588 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759136 kB' 'Mapped: 211828 kB' 'Shmem: 12597668 kB' 'KReclaimable: 437372 kB' 'Slab: 824552 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 387180 kB' 'KernelStack: 12848 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15099900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
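The hugepages.sh@96 test above matches the literal contents of the kernel's transparent-hugepage switch ("always [madvise] never", with the active mode bracketed) against *[never]*, so AnonHugePages is only consulted when THP is not fully disabled. A sketch of that gate, reusing the get_meminfo sketch from earlier (the interpretation of the bracket convention is ours):

    # Sketch of the THP gate at hugepages.sh@96.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)    # counted only when THP can be in play
    else
        anon=0
    fi
    echo "anon_hugepages=$anon"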
00:16:26.873 [... identical read/continue iterations elided while the loop scans /proc/meminfo keys from MemTotal through HardwareCorrupted, none matching AnonHugePages ...]
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
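The trace above is bash xtrace from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a per-node sysfs meminfo file when a node argument is given), strips any "Node <n>" prefix, then walks the keys with IFS=': ' read -r var val _ until the requested key matches, echoes its value, and returns. A minimal sketch of that loop, reconstructed from the trace rather than copied from the SPDK source, so names and details are approximate:

  #!/usr/bin/env bash
  shopt -s extglob  # required for the +([0-9]) pattern below

  # Approximate reconstruction of get_meminfo from the xtrace above;
  # not the verbatim SPDK setup/common.sh implementation.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      # With a node argument, read the per-node file from sysfs instead.
      if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem <"$mem_f"
      # Per-node lines carry a "Node <n> " prefix; strip it.
      mem=("${mem[@]#Node +([0-9]) }")

      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue  # mismatched key: keep scanning
          echo "$val"                       # matched: print the value ...
          return 0                          # ... and stop, as at @33 above
      done < <(printf '%s\n' "${mem[@]}")
  }

  get_meminfo HugePages_Surp  # prints 0 on this box, hence surp=0 below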
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:26.874 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38965868 kB' 'MemAvailable: 42746332 kB' 'Buffers: 8316 kB' 'Cached: 16576880 kB' 'SwapCached: 0 kB' 'Active: 13802900 kB' 'Inactive: 3538764 kB' 'Active(anon): 13354136 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759688 kB' 'Mapped: 211836 kB' 'Shmem: 12597668 kB' 'KReclaimable: 437372 kB' 'Slab: 824552 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 387180 kB' 'KernelStack: 12864 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15099920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198940 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
00:16:26.875 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [per-key scan: MemTotal through HugePages_Rsvd checked against HugePages_Surp -- no match, continue]
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
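Each printf above is one snapshot of the parsed meminfo array, and the hugepage fields in it are internally consistent: the hugetlb pool is HugePages_Total x Hugepagesize = 1024 x 2048 kB = 2097152 kB, exactly the Hugetlb figure in the snapshot, and HugePages_Free equals HugePages_Total, so the whole 2 GiB pool is allocated but unused. A quick sanity check over those numbers, relying on the get_meminfo sketch earlier in this log (not on SPDK internals):

  # Consistency check of the snapshot values; field names are the
  # standard /proc/meminfo keys shown in the printf above.
  total=$(get_meminfo HugePages_Total)  # 1024 pages
  size_kb=$(get_meminfo Hugepagesize)   # 2048 (kB per page)
  pool_kb=$(get_meminfo Hugetlb)        # 2097152 kB == 2 GiB
  free=$(get_meminfo HugePages_Free)    # 1024: nothing consumed yet

  (( total * size_kb == pool_kb )) && echo "hugetlb pool size consistent"
  (( free == total )) && echo "no hugepages in use"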
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:26.876 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:27.138 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:27.138 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:27.138 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38966364 kB' 'MemAvailable: 42746828 kB' 'Buffers: 8316 kB' 'Cached: 16576892 kB' 'SwapCached: 0 kB' 'Active: 13802396 kB' 'Inactive: 3538764 kB' 'Active(anon): 13353632 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759212 kB' 'Mapped: 211836 kB' 'Shmem: 12597680 kB' 'KReclaimable: 437372 kB' 'Slab: 824640 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 387268 kB' 'KernelStack: 12864 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15099940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
00:16:27.139 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [per-key scan: MemTotal through HugePages_Free checked against HugePages_Rsvd -- no match, continue]
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:16:27.140 nr_hugepages=1024
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:16:27.140 resv_hugepages=0
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:16:27.140 surplus_hugepages=0
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:16:27.140 anon_hugepages=0
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
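The assignments and arithmetic tests above (hugepages.sh@97-@110) are the heart of the no_shrink_alloc check: anon, surp and resv all came back 0, and the script asserts that the requested 1024 pages are still fully accounted for before re-reading HugePages_Total. A rough reconstruction of that accounting, inferred from the xtrace (the variable names are as the trace reports them; the surrounding control flow and the NRHUGE name are assumptions):

  # Inferred shape of setup/hugepages.sh@97-@110; the real script may
  # differ. NRHUGE=1024 is the allocation the test requested earlier.
  NRHUGE=1024
  anon=$(get_meminfo AnonHugePages)   # 0: no THP interference
  surp=$(get_meminfo HugePages_Surp)  # 0: no surplus pages
  resv=$(get_meminfo HugePages_Rsvd)  # 0: no reserved pages
  nr_hugepages=$NRHUGE
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv" "surplus_hugepages=$surp" "anon_hugepages=$anon"

  # The pool must still cover the request: nothing was shrunk away.
  (( NRHUGE == nr_hugepages + surp + resv )) || exit 1
  (( NRHUGE == nr_hugepages )) || exit 1
  (( $(get_meminfo HugePages_Total) == NRHUGE ))  # re-check the live count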
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:16:27.140 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38967480 kB' 'MemAvailable: 42747944 kB' 'Buffers: 8316 kB' 'Cached: 16576924 kB' 'SwapCached: 0 kB' 'Active: 13802756 kB' 'Inactive: 3538764 kB' 'Active(anon): 13353992 kB' 'Inactive(anon): 0 kB' 'Active(file): 448764 kB' 'Inactive(file): 3538764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759524 kB' 'Mapped: 211836 kB' 'Shmem: 12597712 kB' 'KReclaimable: 437372 kB' 'Slab: 824632 kB' 'SReclaimable: 437372 kB' 'SUnreclaim: 387260 kB' 'KernelStack: 12848 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15099964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 40704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2051676 kB' 'DirectMap2M: 26179584 kB' 'DirectMap1G: 40894464 kB'
00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [per-key scan: MemTotal through NFS_Unstable checked against HugePages_Total -- no match, continue]
00:16:27.141 16:29:46
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.141 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
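
The loop condensed here is setup/common.sh's get_meminfo: a plain field scan over a meminfo file. A minimal sketch of that scan, reconstructed from the trace above (system-wide case only; the real helper also accepts a node argument):

    # get_meminfo KEY: print the value column for one /proc/meminfo key.
    # Splitting on IFS=': ' turns 'HugePages_Total:    1024' into
    # var=HugePages_Total val=1024; every other key falls through 'continue'.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

On this box 'get_meminfo HugePages_Total' prints 1024, which is the value echoed at common.sh@33 just below.
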
00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.142 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 24842844 kB' 'MemUsed: 7987040 kB' 'SwapCached: 0 kB' 'Active: 4754456 kB' 'Inactive: 138860 kB' 'Active(anon): 4413696 kB' 'Inactive(anon): 0 kB' 'Active(file): 340760 kB' 'Inactive(file): 138860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4334648 kB' 'Mapped: 117064 kB' 'AnonPages: 561824 kB' 'Shmem: 3855028 kB' 'KernelStack: 6632 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314116 kB' 'Slab: 542280 kB' 'SReclaimable: 314116 kB' 'SUnreclaim: 228164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed, 00:16:27.142-00:16:27.143 16:29:46: the same setup/common.sh@31-32 read loop walks the node0 dump above, testing each key against HugePages_Surp and hitting 'continue' for everything from MemTotal through FilePmdMapped]
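
For the per-node pass the same scan runs against a different file: common.sh@22-29 in the trace swaps mem_f to the node's meminfo and strips the 'Node N ' prefix with an extglob expansion before re-reading. A sketch of that source selection, assuming node 0 as in this run:

    # Per-node meminfo source selection (common.sh@22-29 in the trace).
    # Node files prefix every line with 'Node N ', stripped below with an
    # extglob pattern; extglob must be enabled for +([0-9]) to match.
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # 'Node 0 MemFree: ...' -> 'MemFree: ...'
    printf '%s\n' "${mem[@]}"

The HugePages_Surp scan condensed above then runs over this stripped array rather than over the raw file.
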
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:16:27.143 node0=1024 expecting 1024 00:16:27.143 16:29:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:16:27.143 00:16:27.143 real 0m3.321s 00:16:27.143 user 0m1.380s 00:16:27.144 sys 0m1.877s 00:16:27.144 16:29:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:27.144 16:29:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:16:27.144 ************************************ 00:16:27.144 END TEST no_shrink_alloc 00:16:27.144 ************************************ 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:16:27.144 16:29:46 
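
The 'node0=1024 expecting 1024' line is the tail end of the per-node bookkeeping: hugepages.sh@112-128 spreads reserved and surplus pages over the nodes and compares each node's total with what the kernel reports. A rough sketch of that arithmetic, hardwired to the values traced in this run:

    # Per-node hugepage accounting (hugepages.sh@112-128), hardwired to the
    # traced values: 1024 pages on node 0, none on node 1, resv=0, surp=0.
    declare -A nodes_test=([0]=1024 [1]=0)
    resv=0 surp=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + surp ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done
    # the caller then asserts the match: [[ 1024 == \1\0\2\4 ]] at hugepages.sh@130

The clear_hp pass whose 'echo 0' calls bracket this point simply writes 0 into each node's hugepages-*/nr_hugepages sysfs file (the redirect is not visible in the xtrace) and exports CLEAR_HUGE=yes for the following tests.
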
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:16:27.144 16:29:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:16:27.144 00:16:27.144 real 0m13.498s 00:16:27.144 user 0m4.903s 00:16:27.144 sys 0m6.607s 00:16:27.144 16:29:46 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:27.144 16:29:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:16:27.144 ************************************ 00:16:27.144 END TEST hugepages 00:16:27.144 ************************************ 00:16:27.144 16:29:46 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:16:27.144 16:29:46 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:27.144 16:29:46 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:27.144 16:29:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:16:27.144 ************************************ 00:16:27.144 START TEST driver 00:16:27.144 ************************************ 00:16:27.144 16:29:46 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:16:27.144 * Looking for test storage... 
00:16:27.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:16:27.144 16:29:46 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:16:27.144 16:29:46 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:27.144 16:29:46 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:16:30.434 16:29:49 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:16:30.434 16:29:49 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:30.434 16:29:49 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:30.434 16:29:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:16:30.434 ************************************ 00:16:30.434 START TEST guess_driver 00:16:30.434 ************************************ 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 189 > 0 )) 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:16:30.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:16:30.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:16:30.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:16:30.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:16:30.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:16:30.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:16:30.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:16:30.434 16:29:49 setup.sh.driver.guess_driver 
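
The vfio-pci pick just traced rests on three probes: the unsafe-noiommu module parameter, the number of IOMMU groups (189 on this host), and whether modprobe can resolve vfio_pci into concrete .ko files. A condensed sketch of driver.sh@21-37, simplified and with the uio fallback path omitted:

    # Condensed sketch of the vfio-pci pick (driver.sh@21-37) as traced above.
    pick_vfio_driver() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        shopt -s nullglob                  # so an empty dir yields count 0
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci is only usable with an IOMMU present or unsafe mode enabled
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            # is_driver: modprobe --show-depends resolves to 'insmod ....ko' lines
            if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }
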
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:16:30.434 Looking for driver=vfio-pci 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:16:30.434 16:29:49 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:31.370 16:29:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:33.280 16:29:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:33.280 16:29:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:16:33.280 16:29:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:33.280 16:29:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:16:33.280 16:29:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:16:33.280 16:29:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:33.280 16:29:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:16:36.566 00:16:36.566 real 0m6.231s 00:16:36.566 user 0m1.322s 00:16:36.566 sys 0m2.102s 00:16:36.566 16:29:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:36.566 16:29:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:16:36.566 ************************************ 00:16:36.566 END TEST guess_driver 00:16:36.566 ************************************ 00:16:36.566 00:16:36.566 real 0m9.065s 00:16:36.566 user 0m2.013s 00:16:36.566 sys 0m3.246s 00:16:36.566 16:29:55 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:36.566 
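
The long run of '[[ -> == \-\> ]]' / '[[ vfio-pci == vfio-pci ]]' pairs above is driver.sh@57-61 re-reading the 'setup output config' listing: each device line ends in '-> <driver>', and the test fails if any bound driver differs from the guess. A minimal sketch assuming that line shape; config_output is a stand-in for the log's setup wrapper, stubbed here with one illustrative line:

    # Marker loop (driver.sh@57-61): every config line carrying a '->' binding
    # must name the guessed driver.
    config_output() {   # stand-in; device and IDs below are illustrative only
        printf '%s\n' '0000:81:00.0 (8086 0a54): nvme -> vfio-pci'
    }
    driver=vfio-pci fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue        # skip lines without a binding
        [[ $setup_driver == "$driver" ]] || fail=1
    done < <(config_output)
    (( fail == 0 )) && echo "all devices bound to $driver"
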
16:29:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:16:36.566 ************************************ 00:16:36.566 END TEST driver 00:16:36.566 ************************************ 00:16:36.566 16:29:55 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:16:36.566 16:29:55 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:36.566 16:29:55 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:36.566 16:29:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:16:36.566 ************************************ 00:16:36.566 START TEST devices 00:16:36.566 ************************************ 00:16:36.566 16:29:55 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:16:36.566 * Looking for test storage... 00:16:36.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:16:36.566 16:29:55 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:16:36.566 16:29:55 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:16:36.566 16:29:55 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:36.566 16:29:55 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:16:37.942 16:29:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:16:37.942 16:29:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:16:37.942 16:29:57 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:16:37.942 16:29:57 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:16:37.942 16:29:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:16:37.942 16:29:57 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:16:37.942 16:29:57 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:37.942 16:29:57 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:81:00.0 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\1\:\0\0\.\0* ]] 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:16:37.942 16:29:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:37.942 16:29:57 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:16:37.942 No valid GPT data, 
bailing 00:16:37.942 16:29:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:37.942 16:29:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:16:37.942 16:29:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:16:37.942 16:29:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:37.942 16:29:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:37.942 16:29:57 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:81:00.0 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:16:37.942 16:29:57 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:16:37.942 16:29:57 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:37.942 16:29:57 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:37.942 16:29:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:16:37.942 ************************************ 00:16:37.942 START TEST nvme_mount 00:16:37.942 ************************************ 00:16:37.942 16:29:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:16:37.942 16:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:16:37.942 16:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:16:37.942 16:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:16:37.943 16:29:57 
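
'No valid GPT data, bailing' is the good outcome here: scripts/common.sh@378-392 treats a disk as free when spdk-gpt.py finds no GPT and blkid reports no partition-table type, then checks it against the 3 GiB floor (the 2000398934016-byte size echoed above clears it easily). A sketch of that gate; the sysfs size arithmetic stands in for the log's sec_size_to_bytes helper:

    # Free-disk gate as traced: no partition table in use and at least
    # min_disk_size bytes. /sys/block/<dev>/size counts 512-byte sectors.
    block=nvme0n1
    min_disk_size=$((3 * 1024 * 1024 * 1024))      # 3221225472, as in the trace
    pt=$(blkid -s PTTYPE -o value "/dev/$block")   # empty when no table found
    if [[ -z $pt ]]; then
        size=$(( $(< "/sys/block/$block/size") * 512 ))
        (( size >= min_disk_size )) && echo "/dev/$block is usable ($size bytes)"
    fi
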
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:16:37.943 16:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:16:39.322 Creating new GPT entries in memory. 00:16:39.322 GPT data structures destroyed! You may now partition the disk using fdisk or 00:16:39.322 other utilities. 00:16:39.322 16:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:16:39.322 16:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:39.322 16:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:16:39.322 16:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:16:39.322 16:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:16:40.258 Creating new GPT entries in memory. 00:16:40.258 The operation has completed successfully. 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2644305 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:81:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:40.258 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:16:40.259 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:16:40.259 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:40.259 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:16:40.259 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
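
The partition step just logged zaps the old label, creates one 1 GiB partition under flock so concurrent jobs cannot race on the device, waits for the kernel to publish the new partition node, then formats and mounts it. A sketch with the paths from this run; 'udevadm settle' stands in for the log's sync_dev_uevents.sh wait:

    # DESTRUCTIVE: partition, format and mount, as traced
    # (common.sh partition_drive + mkfs). Sectors 2048..2099199 are
    # 2097152 x 512 B = 1 GiB.
    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                            # destroy GPT and MBR data
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # serialized partitioning
    udevadm settle                                      # wait for the p1 uevent
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
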
00:16:40.259 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:40.259 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:16:40.259 16:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:16:40.259 16:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:16:40.259 16:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.635 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:41.636 16:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:16:41.636 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:16:41.636 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:16:41.894 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:16:41.894 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:16:41.894 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:16:41.894 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:16:41.894 16:30:01 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:81:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:16:41.894 16:30:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:16:43.279 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.279 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:16:43.279 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:16:43.279 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.279 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.279 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.279 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.279 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.279 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:81:00.0 data@nvme0n1 '' '' 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:16:43.280 16:30:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:44.655 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.915 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:44.915 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:16:44.915 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:16:44.915 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:16:44.915 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:44.915 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:44.915 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:16:44.915 16:30:04 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:16:44.915 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:16:44.915 00:16:44.915 real 0m6.807s 00:16:44.915 user 0m1.696s 00:16:44.915 sys 0m2.663s 00:16:44.915 16:30:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:44.915 16:30:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:16:44.915 ************************************ 00:16:44.915 END TEST nvme_mount 00:16:44.915 ************************************ 
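The cleanup_nvme helper traced above unmounts and then strips the signatures it created; a minimal equivalent sketch, reusing the placeholder names from the earlier sketch:
disk=/dev/nvme0n1; mnt=/tmp/nvme_mount   # same placeholders as above
mountpoint -q "$mnt" && umount "$mnt"             # unmount only if still mounted
[[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"    # erase the ext4 signature on the partition
[[ -b $disk ]] && wipefs --all "$disk"            # then the GPT headers and protective MBR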
00:16:44.915 16:30:04 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:16:44.915 16:30:04 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:44.915 16:30:04 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:44.915 16:30:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:16:44.915 ************************************ 00:16:44.915 START TEST dm_mount 00:16:44.915 ************************************ 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:16:44.915 16:30:04 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:16:45.852 Creating new GPT entries in memory. 00:16:45.852 GPT data structures destroyed! You may now partition the disk using fdisk or 00:16:45.852 other utilities. 00:16:45.852 16:30:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:16:45.852 16:30:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:45.852 16:30:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:16:45.852 16:30:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:16:45.852 16:30:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:16:47.245 Creating new GPT entries in memory. 00:16:47.245 The operation has completed successfully. 
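The dm_mount run that follows drives the two 1 GiB partitions into one device-mapper target (the holders entries later in the trace show dm-0 over both partitions, most likely a linear concatenation); a rough sketch of the dmsetup step, with the table reconstructed from the sgdisk sector ranges rather than copied from the harness:
# Logical sectors 0..2097151 map to partition 1, 2097152..4194303 to partition 2.
dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test   # resolves to the backing /dev/dm-0 on success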
00:16:47.245 16:30:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:16:47.245 16:30:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:47.245 16:30:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:16:47.245 16:30:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:16:47.245 16:30:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:16:48.180 The operation has completed successfully. 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2647101 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:48.180 16:30:07 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:81:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:16:48.181 16:30:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:16:49.555 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.555 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:16:49.555 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:16:49.555 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.555 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.555 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.555 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.555 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:49.556 16:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:81:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:16:49.556 16:30:09 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:16:49.556 16:30:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:16:50.931 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:16:51.191 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:16:51.191 00:16:51.191 real 0m6.227s 00:16:51.191 user 0m1.157s 00:16:51.191 sys 0m1.924s 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:51.191 16:30:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:16:51.191 ************************************ 00:16:51.191 END TEST dm_mount 00:16:51.191 ************************************ 00:16:51.191 16:30:10 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:16:51.191 16:30:10 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:16:51.191 16:30:10 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:16:51.191 16:30:10 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:51.191 16:30:10 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:16:51.191 16:30:10 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:16:51.191 16:30:10 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:16:51.450 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:16:51.450 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:16:51.450 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:16:51.450 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:16:51.450 16:30:10 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:16:51.450 16:30:10 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:16:51.450 16:30:10 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:16:51.450 16:30:10 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:51.450 16:30:10 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:16:51.450 16:30:10 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:16:51.450 16:30:10 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:16:51.450 00:16:51.450 real 0m15.177s 00:16:51.450 user 0m3.606s 00:16:51.450 sys 0m5.747s 00:16:51.450 16:30:10 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:51.450 16:30:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:16:51.450 ************************************ 00:16:51.450 END TEST devices 00:16:51.450 ************************************ 00:16:51.450 00:16:51.450 real 0m50.434s 00:16:51.450 user 0m14.352s 00:16:51.450 sys 0m21.585s 00:16:51.450 16:30:10 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:51.450 16:30:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:16:51.450 ************************************ 00:16:51.450 END TEST setup.sh 00:16:51.450 ************************************ 00:16:51.450 16:30:11 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:16:52.826 Hugepages 00:16:52.826 node hugesize free / total 00:16:52.826 node0 1048576kB 0 / 0 00:16:52.826 node0 2048kB 2048 / 2048 00:16:52.826 node1 1048576kB 0 / 0 00:16:52.826 node1 2048kB 0 / 0 00:16:52.826 00:16:52.826 Type BDF Vendor Device NUMA Driver Device Block devices 00:16:52.826 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:16:52.826 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:16:52.826 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:16:52.826 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:16:52.826 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:16:52.826 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:16:52.826 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:16:52.826 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:16:52.826 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:16:52.826 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:16:52.826 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:16:52.826 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:16:52.826 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:16:52.826 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:16:52.826 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:16:52.826 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:16:53.084 NVMe 0000:81:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:16:53.084 16:30:12 -- spdk/autotest.sh@130 -- # uname -s 00:16:53.084 16:30:12 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:16:53.084 16:30:12 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:16:53.084 16:30:12 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:16:54.460 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:16:54.460 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:16:54.460 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:16:54.460 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:16:54.460 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:16:54.460 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:16:54.460 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:16:54.460 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:16:54.460 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:16:54.460 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:16:54.460 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:16:54.460 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:16:54.460 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:16:54.460 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:16:54.460 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:16:54.460 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:16:56.366 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:16:56.366 16:30:15 -- common/autotest_common.sh@1528 -- # sleep 1 00:16:57.302 16:30:16 -- common/autotest_common.sh@1529 -- # bdfs=() 00:16:57.302 16:30:16 -- common/autotest_common.sh@1529 -- # local bdfs 00:16:57.302 16:30:16 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:16:57.302 16:30:16 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:16:57.302 16:30:16 -- common/autotest_common.sh@1509 -- # bdfs=() 00:16:57.302 16:30:16 -- common/autotest_common.sh@1509 -- # local bdfs 00:16:57.302 16:30:16 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:57.302 16:30:16 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:16:57.302 16:30:16 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:16:57.302 16:30:16 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:16:57.302 16:30:16 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:81:00.0 00:16:57.302 16:30:16 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:16:58.678 Waiting for block devices as requested 00:16:58.678 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:16:58.937 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:16:58.937 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:16:59.196 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:16:59.196 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:16:59.196 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:16:59.196 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:16:59.455 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:16:59.455 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:16:59.455 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:16:59.455 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:16:59.713 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:16:59.713 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:16:59.713 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:16:59.713 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:16:59.973 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:16:59.973 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:16:59.973 16:30:19 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
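The get_nvme_bdfs helper traced above derives the NVMe PCI address list by piping gen_nvme.sh through jq; a minimal sketch of the same pattern, assuming it is run from the top of an SPDK tree (relative script path is an assumption):
bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))   # e.g. 0000:81:00.0
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"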
00:16:59.973 16:30:19 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:81:00.0 00:16:59.973 16:30:19 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:16:59.973 16:30:19 -- common/autotest_common.sh@1498 -- # grep 0000:81:00.0/nvme/nvme 00:16:59.973 16:30:19 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:16:59.973 16:30:19 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 ]] 00:16:59.973 16:30:19 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:16:59.973 16:30:19 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:16:59.973 16:30:19 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:16:59.973 16:30:19 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:16:59.973 16:30:19 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:16:59.973 16:30:19 -- common/autotest_common.sh@1541 -- # grep oacs 00:16:59.973 16:30:19 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:16:59.973 16:30:19 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:16:59.973 16:30:19 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:16:59.973 16:30:19 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:16:59.973 16:30:19 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:16:59.973 16:30:19 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:16:59.973 16:30:19 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:16:59.973 16:30:19 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:16:59.973 16:30:19 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:16:59.973 16:30:19 -- common/autotest_common.sh@1553 -- # continue 00:16:59.973 16:30:19 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:16:59.973 16:30:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.973 16:30:19 -- common/autotest_common.sh@10 -- # set +x 00:17:00.233 16:30:19 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:17:00.233 16:30:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:00.233 16:30:19 -- common/autotest_common.sh@10 -- # set +x 00:17:00.233 16:30:19 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:17:01.610 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:17:01.610 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:17:01.610 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:17:01.610 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:17:01.610 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:17:01.610 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:17:01.610 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:17:01.610 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:17:01.610 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:17:01.610 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:17:01.610 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:17:01.610 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:17:01.610 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:17:01.610 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:17:01.610 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:17:01.610 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:17:03.513 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:17:03.513 16:30:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:17:03.513 16:30:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.513 16:30:23 -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.513 16:30:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:17:03.513 16:30:23 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:17:03.513 16:30:23 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:17:03.513 16:30:23 -- common/autotest_common.sh@1573 -- # bdfs=() 00:17:03.513 16:30:23 -- common/autotest_common.sh@1573 -- # local bdfs 00:17:03.513 16:30:23 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:17:03.513 16:30:23 -- common/autotest_common.sh@1509 -- # bdfs=() 00:17:03.513 16:30:23 -- common/autotest_common.sh@1509 -- # local bdfs 00:17:03.513 16:30:23 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:03.513 16:30:23 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:03.513 16:30:23 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:17:03.770 16:30:23 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:17:03.770 16:30:23 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:81:00.0 00:17:03.770 16:30:23 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:17:03.770 16:30:23 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:81:00.0/device 00:17:03.770 16:30:23 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:17:03.770 16:30:23 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:17:03.770 16:30:23 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:17:03.770 16:30:23 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:81:00.0 00:17:03.770 16:30:23 -- common/autotest_common.sh@1588 -- # [[ -z 0000:81:00.0 ]] 00:17:03.770 16:30:23 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=2653529 00:17:03.770 16:30:23 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:03.770 16:30:23 -- common/autotest_common.sh@1594 -- # waitforlisten 2653529 00:17:03.770 16:30:23 -- common/autotest_common.sh@827 -- # '[' -z 2653529 ']' 00:17:03.770 16:30:23 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.770 16:30:23 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:03.770 16:30:23 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.770 16:30:23 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:03.770 16:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:03.770 [2024-07-22 16:30:23.245308] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
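get_nvme_bdfs_by_id, as traced above, keeps only controllers whose PCI device ID matches the requested value (0x0a54 here, taken from the log); a minimal sketch of that sysfs check, assuming a bdfs array like the one built in the earlier sketch:
want=0x0a54
matched=()
for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # PCI device ID, e.g. 0x0a54
    [[ $device == "$want" ]] && matched+=("$bdf")
done
printf '%s\n' "${matched[@]}"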
00:17:03.770 [2024-07-22 16:30:23.245407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2653529 ] 00:17:03.770 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.770 [2024-07-22 16:30:23.311819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.770 [2024-07-22 16:30:23.398816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.028 16:30:23 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:04.028 16:30:23 -- common/autotest_common.sh@860 -- # return 0 00:17:04.028 16:30:23 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:17:04.028 16:30:23 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:17:04.028 16:30:23 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:81:00.0 00:17:07.330 nvme0n1 00:17:07.330 16:30:26 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:17:07.330 [2024-07-22 16:30:26.935372] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:17:07.330 request: 00:17:07.330 { 00:17:07.330 "nvme_ctrlr_name": "nvme0", 00:17:07.330 "password": "test", 00:17:07.330 "method": "bdev_nvme_opal_revert", 00:17:07.330 "req_id": 1 00:17:07.330 } 00:17:07.330 Got JSON-RPC error response 00:17:07.330 response: 00:17:07.330 { 00:17:07.330 "code": -32602, 00:17:07.330 "message": "Invalid parameters" 00:17:07.330 } 00:17:07.330 16:30:26 -- common/autotest_common.sh@1600 -- # true 00:17:07.330 16:30:26 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:17:07.330 16:30:26 -- common/autotest_common.sh@1604 -- # killprocess 2653529 00:17:07.330 16:30:26 -- common/autotest_common.sh@946 -- # '[' -z 2653529 ']' 00:17:07.330 16:30:26 -- common/autotest_common.sh@950 -- # kill -0 2653529 00:17:07.330 16:30:26 -- common/autotest_common.sh@951 -- # uname 00:17:07.330 16:30:26 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:07.330 16:30:26 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2653529 00:17:07.589 16:30:26 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:07.589 16:30:26 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:07.589 16:30:26 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2653529' 00:17:07.589 killing process with pid 2653529 00:17:07.589 16:30:26 -- common/autotest_common.sh@965 -- # kill 2653529 00:17:07.589 16:30:26 -- common/autotest_common.sh@970 -- # wait 2653529 00:17:07.589 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:17:07.589 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:17:07.589 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:17:07.589 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:17:07.589 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:17:07.589 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:17:07.589 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:17:07.589 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:17:07.589 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:17:07.589 EAL: Unexpected size 0 of DMA 
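Similarly, the failed OPAL revert above can be re-issued by hand while a target is listening; a hedged sketch assuming the default RPC socket (command and flags taken verbatim from the trace):

# Manual re-run of the call the harness made; on a controller without OPAL
# support this returns the same -32602 "Invalid parameters" response above.
scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_opal_revert -b nvme0 -p test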
00:17:10.119 16:30:29 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:17:10.119 16:30:29 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:17:10.119 16:30:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:17:10.119 16:30:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:17:10.119 16:30:29 -- spdk/autotest.sh@162 -- # timing_enter lib 00:17:10.119 16:30:29 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:10.119 16:30:29 -- common/autotest_common.sh@10 -- # set +x 00:17:10.119 16:30:29 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:17:10.119 16:30:29 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:17:10.119 16:30:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:10.119 16:30:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:10.119 16:30:29 -- common/autotest_common.sh@10 -- # set +x 00:17:10.119 ************************************ 00:17:10.119 START TEST env 00:17:10.119 ************************************ 00:17:10.119 16:30:29 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:17:10.119 * Looking for test storage...
00:17:10.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:17:10.119 16:30:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:17:10.119 16:30:29 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:10.119 16:30:29 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:10.119 16:30:29 env -- common/autotest_common.sh@10 -- # set +x 00:17:10.119 ************************************ 00:17:10.119 START TEST env_memory 00:17:10.119 ************************************ 00:17:10.119 16:30:29 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:17:10.119 00:17:10.119 00:17:10.119 CUnit - A unit testing framework for C - Version 2.1-3 00:17:10.119 http://cunit.sourceforge.net/ 00:17:10.119 00:17:10.119 00:17:10.119 Suite: memory 00:17:10.378 Test: alloc and free memory map ...[2024-07-22 16:30:29.784412] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:17:10.378 passed 00:17:10.378 Test: mem map translation ...[2024-07-22 16:30:29.805864] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:17:10.378 [2024-07-22 16:30:29.805887] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:17:10.378 [2024-07-22 16:30:29.805946] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:17:10.378 [2024-07-22 16:30:29.805977] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:17:10.378 passed 00:17:10.378 Test: mem map registration ...[2024-07-22 16:30:29.849693] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:17:10.378 [2024-07-22 16:30:29.849715] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:17:10.378 passed 00:17:10.378 Test: mem map adjacent registrations ...passed 00:17:10.378 00:17:10.378 Run Summary: Type Total Ran Passed Failed Inactive 00:17:10.378 suites 1 1 n/a 0 0 00:17:10.378 tests 4 4 4 0 0 00:17:10.378 asserts 152 152 152 0 n/a 00:17:10.378 00:17:10.378 Elapsed time = 0.147 seconds 00:17:10.378 00:17:10.378 real 0m0.154s 00:17:10.378 user 0m0.145s 00:17:10.378 sys 0m0.008s 00:17:10.378 16:30:29 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:10.378 16:30:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:17:10.378 ************************************ 00:17:10.378 END TEST env_memory 00:17:10.378 ************************************ 00:17:10.378 16:30:29 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:17:10.378 16:30:29 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:10.378 16:30:29 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:17:10.378 16:30:29 env -- common/autotest_common.sh@10 -- # set +x 00:17:10.378 ************************************ 00:17:10.378 START TEST env_vtophys 00:17:10.378 ************************************ 00:17:10.378 16:30:29 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:17:10.378 EAL: lib.eal log level changed from notice to debug 00:17:10.378 EAL: Detected lcore 0 as core 0 on socket 0 00:17:10.378 EAL: Detected lcore 1 as core 1 on socket 0 00:17:10.378 EAL: Detected lcore 2 as core 2 on socket 0 00:17:10.378 EAL: Detected lcore 3 as core 3 on socket 0 00:17:10.378 EAL: Detected lcore 4 as core 4 on socket 0 00:17:10.378 EAL: Detected lcore 5 as core 5 on socket 0 00:17:10.378 EAL: Detected lcore 6 as core 8 on socket 0 00:17:10.378 EAL: Detected lcore 7 as core 9 on socket 0 00:17:10.378 EAL: Detected lcore 8 as core 10 on socket 0 00:17:10.378 EAL: Detected lcore 9 as core 11 on socket 0 00:17:10.378 EAL: Detected lcore 10 as core 12 on socket 0 00:17:10.378 EAL: Detected lcore 11 as core 13 on socket 0 00:17:10.378 EAL: Detected lcore 12 as core 0 on socket 1 00:17:10.378 EAL: Detected lcore 13 as core 1 on socket 1 00:17:10.378 EAL: Detected lcore 14 as core 2 on socket 1 00:17:10.378 EAL: Detected lcore 15 as core 3 on socket 1 00:17:10.378 EAL: Detected lcore 16 as core 4 on socket 1 00:17:10.378 EAL: Detected lcore 17 as core 5 on socket 1 00:17:10.378 EAL: Detected lcore 18 as core 8 on socket 1 00:17:10.378 EAL: Detected lcore 19 as core 9 on socket 1 00:17:10.378 EAL: Detected lcore 20 as core 10 on socket 1 00:17:10.378 EAL: Detected lcore 21 as core 11 on socket 1 00:17:10.378 EAL: Detected lcore 22 as core 12 on socket 1 00:17:10.378 EAL: Detected lcore 23 as core 13 on socket 1 00:17:10.378 EAL: Detected lcore 24 as core 0 on socket 0 00:17:10.378 EAL: Detected lcore 25 as core 1 on socket 0 00:17:10.378 EAL: Detected lcore 26 as core 2 on socket 0 00:17:10.378 EAL: Detected lcore 27 as core 3 on socket 0 00:17:10.378 EAL: Detected lcore 28 as core 4 on socket 0 00:17:10.378 EAL: Detected lcore 29 as core 5 on socket 0 00:17:10.378 EAL: Detected lcore 30 as core 8 on socket 0 00:17:10.378 EAL: Detected lcore 31 as core 9 on socket 0 00:17:10.378 EAL: Detected lcore 32 as core 10 on socket 0 00:17:10.378 EAL: Detected lcore 33 as core 11 on socket 0 00:17:10.378 EAL: Detected lcore 34 as core 12 on socket 0 00:17:10.378 EAL: Detected lcore 35 as core 13 on socket 0 00:17:10.378 EAL: Detected lcore 36 as core 0 on socket 1 00:17:10.378 EAL: Detected lcore 37 as core 1 on socket 1 00:17:10.378 EAL: Detected lcore 38 as core 2 on socket 1 00:17:10.378 EAL: Detected lcore 39 as core 3 on socket 1 00:17:10.378 EAL: Detected lcore 40 as core 4 on socket 1 00:17:10.378 EAL: Detected lcore 41 as core 5 on socket 1 00:17:10.378 EAL: Detected lcore 42 as core 8 on socket 1 00:17:10.378 EAL: Detected lcore 43 as core 9 on socket 1 00:17:10.378 EAL: Detected lcore 44 as core 10 on socket 1 00:17:10.378 EAL: Detected lcore 45 as core 11 on socket 1 00:17:10.378 EAL: Detected lcore 46 as core 12 on socket 1 00:17:10.378 EAL: Detected lcore 47 as core 13 on socket 1 00:17:10.378 EAL: Maximum logical cores by configuration: 128 00:17:10.378 EAL: Detected CPU lcores: 48 00:17:10.378 EAL: Detected NUMA nodes: 2 00:17:10.378 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:17:10.378 EAL: Detected shared linkage of DPDK 00:17:10.378 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:17:10.378 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:17:10.378 EAL: Registered [vdev] bus. 00:17:10.378 EAL: bus.vdev log level changed from disabled to notice 00:17:10.378 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:17:10.378 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:17:10.378 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:17:10.378 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:17:10.378 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:17:10.378 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:17:10.378 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:17:10.378 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:17:10.378 EAL: No shared files mode enabled, IPC will be disabled 00:17:10.378 EAL: No shared files mode enabled, IPC is disabled 00:17:10.378 EAL: Bus pci wants IOVA as 'DC' 00:17:10.378 EAL: Bus vdev wants IOVA as 'DC' 00:17:10.378 EAL: Buses did not request a specific IOVA mode. 00:17:10.378 EAL: IOMMU is available, selecting IOVA as VA mode. 00:17:10.378 EAL: Selected IOVA mode 'VA' 00:17:10.378 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.378 EAL: Probing VFIO support... 00:17:10.378 EAL: IOMMU type 1 (Type 1) is supported 00:17:10.378 EAL: IOMMU type 7 (sPAPR) is not supported 00:17:10.378 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:17:10.378 EAL: VFIO support initialized 00:17:10.378 EAL: Ask a virtual area of 0x2e000 bytes 00:17:10.378 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:17:10.378 EAL: Setting up physically contiguous memory... 
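The IOMMU and hugepage state that the EAL probes here can be checked straight from sysfs; a quick sketch using standard Linux paths (nothing SPDK-specific is assumed):

# Non-empty when an IOMMU is active, which is what lets EAL select IOVA mode 'VA':
ls /sys/kernel/iommu_groups
# Per-node 2 MB hugepage pools; a zero for node 1 likely explains the
# "No free 2048 kB hugepages reported on node 1" notices in this log:
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages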
00:17:10.378 EAL: Setting maximum number of open files to 524288 00:17:10.378 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:17:10.378 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:17:10.378 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:17:10.378 EAL: Ask a virtual area of 0x61000 bytes 00:17:10.379 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:17:10.379 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:10.379 EAL: Ask a virtual area of 0x400000000 bytes 00:17:10.379 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:17:10.379 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:17:10.379 EAL: Ask a virtual area of 0x61000 bytes 00:17:10.379 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:17:10.379 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:10.379 EAL: Ask a virtual area of 0x400000000 bytes 00:17:10.379 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:17:10.379 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:17:10.379 EAL: Ask a virtual area of 0x61000 bytes 00:17:10.379 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:17:10.379 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:10.379 EAL: Ask a virtual area of 0x400000000 bytes 00:17:10.379 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:17:10.379 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:17:10.379 EAL: Ask a virtual area of 0x61000 bytes 00:17:10.379 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:17:10.379 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:10.379 EAL: Ask a virtual area of 0x400000000 bytes 00:17:10.379 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:17:10.379 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:17:10.379 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:17:10.379 EAL: Ask a virtual area of 0x61000 bytes 00:17:10.379 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:17:10.379 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:17:10.379 EAL: Ask a virtual area of 0x400000000 bytes 00:17:10.379 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:17:10.379 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:17:10.379 EAL: Ask a virtual area of 0x61000 bytes 00:17:10.379 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:17:10.379 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:17:10.379 EAL: Ask a virtual area of 0x400000000 bytes 00:17:10.379 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:17:10.379 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:17:10.379 EAL: Ask a virtual area of 0x61000 bytes 00:17:10.379 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:17:10.379 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:17:10.379 EAL: Ask a virtual area of 0x400000000 bytes 00:17:10.379 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:17:10.379 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:17:10.379 EAL: Ask a virtual area of 0x61000 bytes 00:17:10.379 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:17:10.379 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:17:10.379 EAL: Ask a virtual area of 0x400000000 bytes 00:17:10.379 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:17:10.379 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:17:10.379 EAL: Hugepages will be freed exactly as allocated. 00:17:10.379 EAL: No shared files mode enabled, IPC is disabled 00:17:10.379 EAL: No shared files mode enabled, IPC is disabled 00:17:10.379 EAL: TSC frequency is ~2700000 KHz 00:17:10.379 EAL: Main lcore 0 is ready (tid=7fe699164a00;cpuset=[0]) 00:17:10.379 EAL: Trying to obtain current memory policy. 00:17:10.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:10.379 EAL: Restoring previous memory policy: 0 00:17:10.379 EAL: request: mp_malloc_sync 00:17:10.379 EAL: No shared files mode enabled, IPC is disabled 00:17:10.379 EAL: Heap on socket 0 was expanded by 2MB 00:17:10.379 EAL: No shared files mode enabled, IPC is disabled 00:17:10.637 EAL: No shared files mode enabled, IPC is disabled 00:17:10.637 EAL: No PCI address specified using 'addr=' in: bus=pci 00:17:10.637 EAL: Mem event callback 'spdk:(nil)' registered 00:17:10.637 00:17:10.637 00:17:10.637 CUnit - A unit testing framework for C - Version 2.1-3 00:17:10.637 http://cunit.sourceforge.net/ 00:17:10.637 00:17:10.637 00:17:10.637 Suite: components_suite 00:17:10.637 Test: vtophys_malloc_test ...passed 00:17:10.637 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:17:10.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:10.637 EAL: Restoring previous memory policy: 4 00:17:10.637 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.637 EAL: request: mp_malloc_sync 00:17:10.637 EAL: No shared files mode enabled, IPC is disabled 00:17:10.637 EAL: Heap on socket 0 was expanded by 4MB 00:17:10.637 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.637 EAL: request: mp_malloc_sync 00:17:10.637 EAL: No shared files mode enabled, IPC is disabled 00:17:10.637 EAL: Heap on socket 0 was shrunk by 4MB 00:17:10.637 EAL: Trying to obtain current memory policy. 00:17:10.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:10.637 EAL: Restoring previous memory policy: 4 00:17:10.637 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.637 EAL: request: mp_malloc_sync 00:17:10.637 EAL: No shared files mode enabled, IPC is disabled 00:17:10.637 EAL: Heap on socket 0 was expanded by 6MB 00:17:10.637 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.637 EAL: request: mp_malloc_sync 00:17:10.637 EAL: No shared files mode enabled, IPC is disabled 00:17:10.637 EAL: Heap on socket 0 was shrunk by 6MB 00:17:10.637 EAL: Trying to obtain current memory policy. 00:17:10.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:10.637 EAL: Restoring previous memory policy: 4 00:17:10.637 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.637 EAL: request: mp_malloc_sync 00:17:10.637 EAL: No shared files mode enabled, IPC is disabled 00:17:10.637 EAL: Heap on socket 0 was expanded by 10MB 00:17:10.638 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.638 EAL: request: mp_malloc_sync 00:17:10.638 EAL: No shared files mode enabled, IPC is disabled 00:17:10.638 EAL: Heap on socket 0 was shrunk by 10MB 00:17:10.638 EAL: Trying to obtain current memory policy. 
00:17:10.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:10.638 EAL: Restoring previous memory policy: 4 00:17:10.638 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.638 EAL: request: mp_malloc_sync 00:17:10.638 EAL: No shared files mode enabled, IPC is disabled 00:17:10.638 EAL: Heap on socket 0 was expanded by 18MB 00:17:10.638 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.638 EAL: request: mp_malloc_sync 00:17:10.638 EAL: No shared files mode enabled, IPC is disabled 00:17:10.638 EAL: Heap on socket 0 was shrunk by 18MB 00:17:10.638 EAL: Trying to obtain current memory policy. 00:17:10.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:10.638 EAL: Restoring previous memory policy: 4 00:17:10.638 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.638 EAL: request: mp_malloc_sync 00:17:10.638 EAL: No shared files mode enabled, IPC is disabled 00:17:10.638 EAL: Heap on socket 0 was expanded by 34MB 00:17:10.638 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.638 EAL: request: mp_malloc_sync 00:17:10.638 EAL: No shared files mode enabled, IPC is disabled 00:17:10.638 EAL: Heap on socket 0 was shrunk by 34MB 00:17:10.638 EAL: Trying to obtain current memory policy. 00:17:10.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:10.638 EAL: Restoring previous memory policy: 4 00:17:10.638 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.638 EAL: request: mp_malloc_sync 00:17:10.638 EAL: No shared files mode enabled, IPC is disabled 00:17:10.638 EAL: Heap on socket 0 was expanded by 66MB 00:17:10.638 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.638 EAL: request: mp_malloc_sync 00:17:10.638 EAL: No shared files mode enabled, IPC is disabled 00:17:10.638 EAL: Heap on socket 0 was shrunk by 66MB 00:17:10.638 EAL: Trying to obtain current memory policy. 00:17:10.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:10.638 EAL: Restoring previous memory policy: 4 00:17:10.638 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.638 EAL: request: mp_malloc_sync 00:17:10.638 EAL: No shared files mode enabled, IPC is disabled 00:17:10.638 EAL: Heap on socket 0 was expanded by 130MB 00:17:10.638 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.638 EAL: request: mp_malloc_sync 00:17:10.638 EAL: No shared files mode enabled, IPC is disabled 00:17:10.638 EAL: Heap on socket 0 was shrunk by 130MB 00:17:10.638 EAL: Trying to obtain current memory policy. 00:17:10.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:10.638 EAL: Restoring previous memory policy: 4 00:17:10.638 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.638 EAL: request: mp_malloc_sync 00:17:10.638 EAL: No shared files mode enabled, IPC is disabled 00:17:10.638 EAL: Heap on socket 0 was expanded by 258MB 00:17:10.896 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.896 EAL: request: mp_malloc_sync 00:17:10.896 EAL: No shared files mode enabled, IPC is disabled 00:17:10.896 EAL: Heap on socket 0 was shrunk by 258MB 00:17:10.896 EAL: Trying to obtain current memory policy. 
00:17:10.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:10.896 EAL: Restoring previous memory policy: 4 00:17:10.896 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.896 EAL: request: mp_malloc_sync 00:17:10.896 EAL: No shared files mode enabled, IPC is disabled 00:17:10.896 EAL: Heap on socket 0 was expanded by 514MB 00:17:11.154 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.154 EAL: request: mp_malloc_sync 00:17:11.154 EAL: No shared files mode enabled, IPC is disabled 00:17:11.154 EAL: Heap on socket 0 was shrunk by 514MB 00:17:11.154 EAL: Trying to obtain current memory policy. 00:17:11.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:11.412 EAL: Restoring previous memory policy: 4 00:17:11.412 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.412 EAL: request: mp_malloc_sync 00:17:11.412 EAL: No shared files mode enabled, IPC is disabled 00:17:11.412 EAL: Heap on socket 0 was expanded by 1026MB 00:17:11.670 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.929 EAL: request: mp_malloc_sync 00:17:11.929 EAL: No shared files mode enabled, IPC is disabled 00:17:11.929 EAL: Heap on socket 0 was shrunk by 1026MB 00:17:11.929 passed 00:17:11.929 00:17:11.929 Run Summary: Type Total Ran Passed Failed Inactive 00:17:11.929 suites 1 1 n/a 0 0 00:17:11.929 tests 2 2 2 0 0 00:17:11.929 asserts 497 497 497 0 n/a 00:17:11.929 00:17:11.929 Elapsed time = 1.373 seconds 00:17:11.929 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.929 EAL: request: mp_malloc_sync 00:17:11.929 EAL: No shared files mode enabled, IPC is disabled 00:17:11.929 EAL: Heap on socket 0 was shrunk by 2MB 00:17:11.929 EAL: No shared files mode enabled, IPC is disabled 00:17:11.929 EAL: No shared files mode enabled, IPC is disabled 00:17:11.929 EAL: No shared files mode enabled, IPC is disabled 00:17:11.929 00:17:11.929 real 0m1.507s 00:17:11.929 user 0m0.860s 00:17:11.929 sys 0m0.608s 00:17:11.929 16:30:31 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:11.929 16:30:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:17:11.929 ************************************ 00:17:11.929 END TEST env_vtophys 00:17:11.929 ************************************ 00:17:11.929 16:30:31 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:17:11.929 16:30:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:11.929 16:30:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:11.929 16:30:31 env -- common/autotest_common.sh@10 -- # set +x 00:17:11.929 ************************************ 00:17:11.929 START TEST env_pci 00:17:11.929 ************************************ 00:17:11.929 16:30:31 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:17:11.929 00:17:11.929 00:17:11.929 CUnit - A unit testing framework for C - Version 2.1-3 00:17:11.929 http://cunit.sourceforge.net/ 00:17:11.929 00:17:11.929 00:17:11.929 Suite: pci 00:17:11.929 Test: pci_hook ...[2024-07-22 16:30:31.512281] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2654552 has claimed it 00:17:11.929 EAL: Cannot find device (10000:00:01.0) 00:17:11.929 EAL: Failed to attach device on primary process 00:17:11.929 passed 00:17:11.929 00:17:11.929 Run Summary: Type Total Ran Passed Failed Inactive 
00:17:11.929 suites 1 1 n/a 0 0 00:17:11.929 tests 1 1 1 0 0 00:17:11.929 asserts 25 25 25 0 n/a 00:17:11.929 00:17:11.929 Elapsed time = 0.026 seconds 00:17:11.929 00:17:11.929 real 0m0.038s 00:17:11.929 user 0m0.012s 00:17:11.929 sys 0m0.026s 00:17:11.929 16:30:31 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:11.929 16:30:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:17:11.929 ************************************ 00:17:11.929 END TEST env_pci 00:17:11.929 ************************************ 00:17:11.929 16:30:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:17:11.929 16:30:31 env -- env/env.sh@15 -- # uname 00:17:11.929 16:30:31 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:17:11.929 16:30:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:17:11.929 16:30:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:11.929 16:30:31 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:11.929 16:30:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:11.929 16:30:31 env -- common/autotest_common.sh@10 -- # set +x 00:17:12.188 ************************************ 00:17:12.188 START TEST env_dpdk_post_init 00:17:12.188 ************************************ 00:17:12.188 16:30:31 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:12.188 EAL: Detected CPU lcores: 48 00:17:12.188 EAL: Detected NUMA nodes: 2 00:17:12.188 EAL: Detected shared linkage of DPDK 00:17:12.188 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:12.188 EAL: Selected IOVA mode 'VA' 00:17:12.188 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.188 EAL: VFIO support initialized 00:17:12.188 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:12.188 EAL: Using IOMMU type 1 (Type 1) 00:17:12.188 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:17:12.188 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:17:12.188 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:17:12.188 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:17:12.188 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:17:12.188 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:17:12.188 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:17:12.188 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:17:12.188 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:17:12.188 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:17:12.446 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:17:12.446 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:17:12.446 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:17:12.446 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:17:12.446 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:17:12.446 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:17:13.013 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:81:00.0 (socket 1) 00:17:17.198 EAL: Releasing PCI mapped resource for 0000:81:00.0 00:17:17.198 EAL: Calling pci_unmap_resource for 0000:81:00.0 at 0x202001040000 00:17:17.198 Starting DPDK initialization... 00:17:17.198 Starting SPDK post initialization... 00:17:17.198 SPDK NVMe probe 00:17:17.198 Attaching to 0000:81:00.0 00:17:17.198 Attached to 0000:81:00.0 00:17:17.198 Cleaning up... 00:17:17.198 00:17:17.198 real 0m5.199s 00:17:17.198 user 0m3.962s 00:17:17.198 sys 0m0.291s 00:17:17.198 16:30:36 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:17.198 16:30:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.198 ************************************ 00:17:17.198 END TEST env_dpdk_post_init 00:17:17.198 ************************************ 00:17:17.198 16:30:36 env -- env/env.sh@26 -- # uname 00:17:17.198 16:30:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:17:17.198 16:30:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:17:17.198 16:30:36 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:17.198 16:30:36 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:17.198 16:30:36 env -- common/autotest_common.sh@10 -- # set +x 00:17:17.198 ************************************ 00:17:17.198 START TEST env_mem_callbacks 00:17:17.198 ************************************ 00:17:17.198 16:30:36 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:17:17.198 EAL: Detected CPU lcores: 48 00:17:17.198 EAL: Detected NUMA nodes: 2 00:17:17.198 EAL: Detected shared linkage of DPDK 00:17:17.457 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:17.457 EAL: Selected IOVA mode 'VA' 00:17:17.457 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.457 EAL: VFIO support initialized 00:17:17.457 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:17.457 00:17:17.457 00:17:17.457 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.457 http://cunit.sourceforge.net/ 00:17:17.457 00:17:17.457 00:17:17.457 Suite: memory 00:17:17.457 Test: test ... 
00:17:17.457 register 0x200000200000 2097152 00:17:17.457 malloc 3145728 00:17:17.457 register 0x200000400000 4194304 00:17:17.457 buf 0x200000500000 len 3145728 PASSED 00:17:17.457 malloc 64 00:17:17.457 buf 0x2000004fff40 len 64 PASSED 00:17:17.457 malloc 4194304 00:17:17.457 register 0x200000800000 6291456 00:17:17.457 buf 0x200000a00000 len 4194304 PASSED 00:17:17.457 free 0x200000500000 3145728 00:17:17.457 free 0x2000004fff40 64 00:17:17.457 unregister 0x200000400000 4194304 PASSED 00:17:17.457 free 0x200000a00000 4194304 00:17:17.457 unregister 0x200000800000 6291456 PASSED 00:17:17.457 malloc 8388608 00:17:17.457 register 0x200000400000 10485760 00:17:17.457 buf 0x200000600000 len 8388608 PASSED 00:17:17.457 free 0x200000600000 8388608 00:17:17.457 unregister 0x200000400000 10485760 PASSED 00:17:17.457 passed 00:17:17.457 00:17:17.457 Run Summary: Type Total Ran Passed Failed Inactive 00:17:17.457 suites 1 1 n/a 0 0 00:17:17.457 tests 1 1 1 0 0 00:17:17.457 asserts 15 15 15 0 n/a 00:17:17.457 00:17:17.457 Elapsed time = 0.005 seconds 00:17:17.457 00:17:17.457 real 0m0.054s 00:17:17.457 user 0m0.012s 00:17:17.457 sys 0m0.041s 00:17:17.457 16:30:36 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:17.457 16:30:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:17:17.457 ************************************ 00:17:17.457 END TEST env_mem_callbacks 00:17:17.457 ************************************ 00:17:17.457 00:17:17.457 real 0m7.228s 00:17:17.457 user 0m5.102s 00:17:17.457 sys 0m1.159s 00:17:17.457 16:30:36 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:17.457 16:30:36 env -- common/autotest_common.sh@10 -- # set +x 00:17:17.457 ************************************ 00:17:17.457 END TEST env 00:17:17.457 ************************************ 00:17:17.457 16:30:36 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:17:17.457 16:30:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:17.457 16:30:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:17.457 16:30:36 -- common/autotest_common.sh@10 -- # set +x 00:17:17.457 ************************************ 00:17:17.457 START TEST rpc 00:17:17.457 ************************************ 00:17:17.457 16:30:36 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:17:17.457 * Looking for test storage... 00:17:17.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:17:17.457 16:30:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2655335 00:17:17.457 16:30:37 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:17:17.457 16:30:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:17.457 16:30:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2655335 00:17:17.457 16:30:37 rpc -- common/autotest_common.sh@827 -- # '[' -z 2655335 ']' 00:17:17.457 16:30:37 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.457 16:30:37 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:17.457 16:30:37 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
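waitforlisten blocks until the freshly started spdk_tgt answers on its RPC socket; a minimal stand-in (loop shape is an assumption, rpc_get_methods is a standard SPDK RPC) looks like:

# Sketch of a waitforlisten-style poll; not the autotest implementation.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo 'spdk_tgt is up on /var/tmp/spdk.sock'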
00:17:17.457 16:30:37 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:17.457 16:30:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.457 [2024-07-22 16:30:37.055405] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:17.457 [2024-07-22 16:30:37.055496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655335 ] 00:17:17.457 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.716 [2024-07-22 16:30:37.121249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.716 [2024-07-22 16:30:37.204908] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:17:17.716 [2024-07-22 16:30:37.204982] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2655335' to capture a snapshot of events at runtime. 00:17:17.716 [2024-07-22 16:30:37.204998] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.716 [2024-07-22 16:30:37.205010] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.716 [2024-07-22 16:30:37.205035] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2655335 for offline analysis/debug. 00:17:17.716 [2024-07-22 16:30:37.205063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.975 16:30:37 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:17.975 16:30:37 rpc -- common/autotest_common.sh@860 -- # return 0 00:17:17.975 16:30:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:17:17.975 16:30:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:17:17.975 16:30:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:17:17.975 16:30:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:17:17.975 16:30:37 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:17.975 16:30:37 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:17.975 16:30:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 ************************************ 00:17:17.975 START TEST rpc_integrity 00:17:17.975 ************************************ 00:17:17.975 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:17:17.975 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:17.975 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.975 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.975 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:17.975 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:17.975 16:30:37 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:17.975 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:17.975 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.975 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.975 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:17:17.975 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:17.975 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.975 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.975 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:17.975 { 00:17:17.975 "name": "Malloc0", 00:17:17.975 "aliases": [ 00:17:17.975 "83df8719-83c6-4ed5-9d0c-1b138e758325" 00:17:17.975 ], 00:17:17.975 "product_name": "Malloc disk", 00:17:17.975 "block_size": 512, 00:17:17.975 "num_blocks": 16384, 00:17:17.975 "uuid": "83df8719-83c6-4ed5-9d0c-1b138e758325", 00:17:17.975 "assigned_rate_limits": { 00:17:17.975 "rw_ios_per_sec": 0, 00:17:17.975 "rw_mbytes_per_sec": 0, 00:17:17.975 "r_mbytes_per_sec": 0, 00:17:17.975 "w_mbytes_per_sec": 0 00:17:17.975 }, 00:17:17.976 "claimed": false, 00:17:17.976 "zoned": false, 00:17:17.976 "supported_io_types": { 00:17:17.976 "read": true, 00:17:17.976 "write": true, 00:17:17.976 "unmap": true, 00:17:17.976 "write_zeroes": true, 00:17:17.976 "flush": true, 00:17:17.976 "reset": true, 00:17:17.976 "compare": false, 00:17:17.976 "compare_and_write": false, 00:17:17.976 "abort": true, 00:17:17.976 "nvme_admin": false, 00:17:17.976 "nvme_io": false 00:17:17.976 }, 00:17:17.976 "memory_domains": [ 00:17:17.976 { 00:17:17.976 "dma_device_id": "system", 00:17:17.976 "dma_device_type": 1 00:17:17.976 }, 00:17:17.976 { 00:17:17.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.976 "dma_device_type": 2 00:17:17.976 } 00:17:17.976 ], 00:17:17.976 "driver_specific": {} 00:17:17.976 } 00:17:17.976 ]' 00:17:17.976 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:17.976 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:17.976 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:17:17.976 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.976 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:17.976 [2024-07-22 16:30:37.588390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:17:17.976 [2024-07-22 16:30:37.588435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.976 [2024-07-22 16:30:37.588458] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa318f0 00:17:17.976 [2024-07-22 16:30:37.588473] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.976 [2024-07-22 16:30:37.590121] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.976 [2024-07-22 16:30:37.590147] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:17.976 Passthru0 00:17:17.976 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.976 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:17:17.976 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.976 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:17.976 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.976 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:17.976 { 00:17:17.976 "name": "Malloc0", 00:17:17.976 "aliases": [ 00:17:17.976 "83df8719-83c6-4ed5-9d0c-1b138e758325" 00:17:17.976 ], 00:17:17.976 "product_name": "Malloc disk", 00:17:17.976 "block_size": 512, 00:17:17.976 "num_blocks": 16384, 00:17:17.976 "uuid": "83df8719-83c6-4ed5-9d0c-1b138e758325", 00:17:17.976 "assigned_rate_limits": { 00:17:17.976 "rw_ios_per_sec": 0, 00:17:17.976 "rw_mbytes_per_sec": 0, 00:17:17.976 "r_mbytes_per_sec": 0, 00:17:17.976 "w_mbytes_per_sec": 0 00:17:17.976 }, 00:17:17.976 "claimed": true, 00:17:17.976 "claim_type": "exclusive_write", 00:17:17.976 "zoned": false, 00:17:17.976 "supported_io_types": { 00:17:17.976 "read": true, 00:17:17.976 "write": true, 00:17:17.976 "unmap": true, 00:17:17.976 "write_zeroes": true, 00:17:17.976 "flush": true, 00:17:17.976 "reset": true, 00:17:17.976 "compare": false, 00:17:17.976 "compare_and_write": false, 00:17:17.976 "abort": true, 00:17:17.976 "nvme_admin": false, 00:17:17.976 "nvme_io": false 00:17:17.976 }, 00:17:17.976 "memory_domains": [ 00:17:17.976 { 00:17:17.976 "dma_device_id": "system", 00:17:17.976 "dma_device_type": 1 00:17:17.976 }, 00:17:17.976 { 00:17:17.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.976 "dma_device_type": 2 00:17:17.976 } 00:17:17.976 ], 00:17:17.976 "driver_specific": {} 00:17:17.976 }, 00:17:17.976 { 00:17:17.976 "name": "Passthru0", 00:17:17.976 "aliases": [ 00:17:17.976 "a7f3121b-4cfb-50b0-9d1f-b891108aa27f" 00:17:17.976 ], 00:17:17.976 "product_name": "passthru", 00:17:17.976 "block_size": 512, 00:17:17.976 "num_blocks": 16384, 00:17:17.976 "uuid": "a7f3121b-4cfb-50b0-9d1f-b891108aa27f", 00:17:17.976 "assigned_rate_limits": { 00:17:17.976 "rw_ios_per_sec": 0, 00:17:17.976 "rw_mbytes_per_sec": 0, 00:17:17.976 "r_mbytes_per_sec": 0, 00:17:17.976 "w_mbytes_per_sec": 0 00:17:17.976 }, 00:17:17.976 "claimed": false, 00:17:17.976 "zoned": false, 00:17:17.976 "supported_io_types": { 00:17:17.976 "read": true, 00:17:17.976 "write": true, 00:17:17.976 "unmap": true, 00:17:17.976 "write_zeroes": true, 00:17:17.976 "flush": true, 00:17:17.976 "reset": true, 00:17:17.976 "compare": false, 00:17:17.976 "compare_and_write": false, 00:17:17.976 "abort": true, 00:17:17.976 "nvme_admin": false, 00:17:17.976 "nvme_io": false 00:17:17.976 }, 00:17:17.976 "memory_domains": [ 00:17:17.976 { 00:17:17.976 "dma_device_id": "system", 00:17:17.976 "dma_device_type": 1 00:17:17.976 }, 00:17:17.976 { 00:17:17.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.976 "dma_device_type": 2 00:17:17.976 } 00:17:17.976 ], 00:17:17.976 "driver_specific": { 00:17:17.976 "passthru": { 00:17:17.976 "name": "Passthru0", 00:17:17.976 "base_bdev_name": "Malloc0" 00:17:17.976 } 00:17:17.976 } 00:17:17.976 } 00:17:17.976 ]' 00:17:17.976 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:18.234 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:18.234 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:18.234 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.234 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.234 
16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.234 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:18.234 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.234 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.234 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.234 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:18.234 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.234 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.234 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.235 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:18.235 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:18.235 16:30:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:18.235 00:17:18.235 real 0m0.226s 00:17:18.235 user 0m0.148s 00:17:18.235 sys 0m0.020s 00:17:18.235 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:18.235 16:30:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.235 ************************************ 00:17:18.235 END TEST rpc_integrity 00:17:18.235 ************************************ 00:17:18.235 16:30:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:17:18.235 16:30:37 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:18.235 16:30:37 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:18.235 16:30:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.235 ************************************ 00:17:18.235 START TEST rpc_plugins 00:17:18.235 ************************************ 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:17:18.235 16:30:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.235 16:30:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:17:18.235 16:30:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.235 16:30:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:17:18.235 { 00:17:18.235 "name": "Malloc1", 00:17:18.235 "aliases": [ 00:17:18.235 "584f9fab-7668-418e-a860-28863f515cc2" 00:17:18.235 ], 00:17:18.235 "product_name": "Malloc disk", 00:17:18.235 "block_size": 4096, 00:17:18.235 "num_blocks": 256, 00:17:18.235 "uuid": "584f9fab-7668-418e-a860-28863f515cc2", 00:17:18.235 "assigned_rate_limits": { 00:17:18.235 "rw_ios_per_sec": 0, 00:17:18.235 "rw_mbytes_per_sec": 0, 00:17:18.235 "r_mbytes_per_sec": 0, 00:17:18.235 "w_mbytes_per_sec": 0 00:17:18.235 }, 00:17:18.235 "claimed": false, 00:17:18.235 "zoned": false, 00:17:18.235 "supported_io_types": { 00:17:18.235 "read": true, 00:17:18.235 "write": true, 00:17:18.235 "unmap": true, 00:17:18.235 "write_zeroes": true, 00:17:18.235 
"flush": true, 00:17:18.235 "reset": true, 00:17:18.235 "compare": false, 00:17:18.235 "compare_and_write": false, 00:17:18.235 "abort": true, 00:17:18.235 "nvme_admin": false, 00:17:18.235 "nvme_io": false 00:17:18.235 }, 00:17:18.235 "memory_domains": [ 00:17:18.235 { 00:17:18.235 "dma_device_id": "system", 00:17:18.235 "dma_device_type": 1 00:17:18.235 }, 00:17:18.235 { 00:17:18.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.235 "dma_device_type": 2 00:17:18.235 } 00:17:18.235 ], 00:17:18.235 "driver_specific": {} 00:17:18.235 } 00:17:18.235 ]' 00:17:18.235 16:30:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:17:18.235 16:30:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:17:18.235 16:30:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.235 16:30:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.235 16:30:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:17:18.235 16:30:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:17:18.235 16:30:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:17:18.235 00:17:18.235 real 0m0.111s 00:17:18.235 user 0m0.074s 00:17:18.235 sys 0m0.007s 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:18.235 16:30:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:18.235 ************************************ 00:17:18.235 END TEST rpc_plugins 00:17:18.235 ************************************ 00:17:18.235 16:30:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:17:18.235 16:30:37 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:18.235 16:30:37 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:18.235 16:30:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.494 ************************************ 00:17:18.494 START TEST rpc_trace_cmd_test 00:17:18.494 ************************************ 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:17:18.494 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2655335", 00:17:18.494 "tpoint_group_mask": "0x8", 00:17:18.494 "iscsi_conn": { 00:17:18.494 "mask": "0x2", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "scsi": { 00:17:18.494 "mask": "0x4", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "bdev": { 00:17:18.494 "mask": "0x8", 00:17:18.494 "tpoint_mask": 
"0xffffffffffffffff" 00:17:18.494 }, 00:17:18.494 "nvmf_rdma": { 00:17:18.494 "mask": "0x10", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "nvmf_tcp": { 00:17:18.494 "mask": "0x20", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "ftl": { 00:17:18.494 "mask": "0x40", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "blobfs": { 00:17:18.494 "mask": "0x80", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "dsa": { 00:17:18.494 "mask": "0x200", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "thread": { 00:17:18.494 "mask": "0x400", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "nvme_pcie": { 00:17:18.494 "mask": "0x800", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "iaa": { 00:17:18.494 "mask": "0x1000", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "nvme_tcp": { 00:17:18.494 "mask": "0x2000", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "bdev_nvme": { 00:17:18.494 "mask": "0x4000", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 }, 00:17:18.494 "sock": { 00:17:18.494 "mask": "0x8000", 00:17:18.494 "tpoint_mask": "0x0" 00:17:18.494 } 00:17:18.494 }' 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:17:18.494 16:30:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:17:18.494 16:30:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:17:18.494 16:30:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:17:18.494 16:30:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:17:18.494 16:30:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:17:18.494 16:30:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:17:18.494 00:17:18.494 real 0m0.194s 00:17:18.494 user 0m0.171s 00:17:18.494 sys 0m0.017s 00:17:18.494 16:30:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:18.494 16:30:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.494 ************************************ 00:17:18.494 END TEST rpc_trace_cmd_test 00:17:18.494 ************************************ 00:17:18.494 16:30:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:17:18.494 16:30:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:17:18.494 16:30:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:17:18.494 16:30:38 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:18.494 16:30:38 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:18.494 16:30:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.752 ************************************ 00:17:18.752 START TEST rpc_daemon_integrity 00:17:18.752 ************************************ 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.752 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:18.752 { 00:17:18.753 "name": "Malloc2", 00:17:18.753 "aliases": [ 00:17:18.753 "886def59-32b8-4843-8728-c72f11de7fd5" 00:17:18.753 ], 00:17:18.753 "product_name": "Malloc disk", 00:17:18.753 "block_size": 512, 00:17:18.753 "num_blocks": 16384, 00:17:18.753 "uuid": "886def59-32b8-4843-8728-c72f11de7fd5", 00:17:18.753 "assigned_rate_limits": { 00:17:18.753 "rw_ios_per_sec": 0, 00:17:18.753 "rw_mbytes_per_sec": 0, 00:17:18.753 "r_mbytes_per_sec": 0, 00:17:18.753 "w_mbytes_per_sec": 0 00:17:18.753 }, 00:17:18.753 "claimed": false, 00:17:18.753 "zoned": false, 00:17:18.753 "supported_io_types": { 00:17:18.753 "read": true, 00:17:18.753 "write": true, 00:17:18.753 "unmap": true, 00:17:18.753 "write_zeroes": true, 00:17:18.753 "flush": true, 00:17:18.753 "reset": true, 00:17:18.753 "compare": false, 00:17:18.753 "compare_and_write": false, 00:17:18.753 "abort": true, 00:17:18.753 "nvme_admin": false, 00:17:18.753 "nvme_io": false 00:17:18.753 }, 00:17:18.753 "memory_domains": [ 00:17:18.753 { 00:17:18.753 "dma_device_id": "system", 00:17:18.753 "dma_device_type": 1 00:17:18.753 }, 00:17:18.753 { 00:17:18.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.753 "dma_device_type": 2 00:17:18.753 } 00:17:18.753 ], 00:17:18.753 "driver_specific": {} 00:17:18.753 } 00:17:18.753 ]' 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.753 [2024-07-22 16:30:38.254269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:17:18.753 [2024-07-22 16:30:38.254305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.753 [2024-07-22 16:30:38.254345] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x92c600 00:17:18.753 [2024-07-22 16:30:38.254359] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.753 [2024-07-22 16:30:38.255795] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.753 [2024-07-22 16:30:38.255824] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:18.753 Passthru0 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:18.753 { 00:17:18.753 "name": "Malloc2", 00:17:18.753 "aliases": [ 00:17:18.753 "886def59-32b8-4843-8728-c72f11de7fd5" 00:17:18.753 ], 00:17:18.753 "product_name": "Malloc disk", 00:17:18.753 "block_size": 512, 00:17:18.753 "num_blocks": 16384, 00:17:18.753 "uuid": "886def59-32b8-4843-8728-c72f11de7fd5", 00:17:18.753 "assigned_rate_limits": { 00:17:18.753 "rw_ios_per_sec": 0, 00:17:18.753 "rw_mbytes_per_sec": 0, 00:17:18.753 "r_mbytes_per_sec": 0, 00:17:18.753 "w_mbytes_per_sec": 0 00:17:18.753 }, 00:17:18.753 "claimed": true, 00:17:18.753 "claim_type": "exclusive_write", 00:17:18.753 "zoned": false, 00:17:18.753 "supported_io_types": { 00:17:18.753 "read": true, 00:17:18.753 "write": true, 00:17:18.753 "unmap": true, 00:17:18.753 "write_zeroes": true, 00:17:18.753 "flush": true, 00:17:18.753 "reset": true, 00:17:18.753 "compare": false, 00:17:18.753 "compare_and_write": false, 00:17:18.753 "abort": true, 00:17:18.753 "nvme_admin": false, 00:17:18.753 "nvme_io": false 00:17:18.753 }, 00:17:18.753 "memory_domains": [ 00:17:18.753 { 00:17:18.753 "dma_device_id": "system", 00:17:18.753 "dma_device_type": 1 00:17:18.753 }, 00:17:18.753 { 00:17:18.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.753 "dma_device_type": 2 00:17:18.753 } 00:17:18.753 ], 00:17:18.753 "driver_specific": {} 00:17:18.753 }, 00:17:18.753 { 00:17:18.753 "name": "Passthru0", 00:17:18.753 "aliases": [ 00:17:18.753 "57a37e64-8bee-534a-b67b-bfaa2647dc95" 00:17:18.753 ], 00:17:18.753 "product_name": "passthru", 00:17:18.753 "block_size": 512, 00:17:18.753 "num_blocks": 16384, 00:17:18.753 "uuid": "57a37e64-8bee-534a-b67b-bfaa2647dc95", 00:17:18.753 "assigned_rate_limits": { 00:17:18.753 "rw_ios_per_sec": 0, 00:17:18.753 "rw_mbytes_per_sec": 0, 00:17:18.753 "r_mbytes_per_sec": 0, 00:17:18.753 "w_mbytes_per_sec": 0 00:17:18.753 }, 00:17:18.753 "claimed": false, 00:17:18.753 "zoned": false, 00:17:18.753 "supported_io_types": { 00:17:18.753 "read": true, 00:17:18.753 "write": true, 00:17:18.753 "unmap": true, 00:17:18.753 "write_zeroes": true, 00:17:18.753 "flush": true, 00:17:18.753 "reset": true, 00:17:18.753 "compare": false, 00:17:18.753 "compare_and_write": false, 00:17:18.753 "abort": true, 00:17:18.753 "nvme_admin": false, 00:17:18.753 "nvme_io": false 00:17:18.753 }, 00:17:18.753 "memory_domains": [ 00:17:18.753 { 00:17:18.753 "dma_device_id": "system", 00:17:18.753 "dma_device_type": 1 00:17:18.753 }, 00:17:18.753 { 00:17:18.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.753 "dma_device_type": 2 00:17:18.753 } 00:17:18.753 ], 00:17:18.753 "driver_specific": { 00:17:18.753 "passthru": { 00:17:18.753 "name": "Passthru0", 00:17:18.753 "base_bdev_name": "Malloc2" 00:17:18.753 } 00:17:18.753 } 00:17:18.753 } 00:17:18.753 ]' 00:17:18.753 16:30:38 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:18.753 00:17:18.753 real 0m0.220s 00:17:18.753 user 0m0.145s 00:17:18.753 sys 0m0.019s 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:18.753 16:30:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.753 ************************************ 00:17:18.753 END TEST rpc_daemon_integrity 00:17:18.753 ************************************ 00:17:18.753 16:30:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:18.753 16:30:38 rpc -- rpc/rpc.sh@84 -- # killprocess 2655335 00:17:18.753 16:30:38 rpc -- common/autotest_common.sh@946 -- # '[' -z 2655335 ']' 00:17:18.753 16:30:38 rpc -- common/autotest_common.sh@950 -- # kill -0 2655335 00:17:18.753 16:30:38 rpc -- common/autotest_common.sh@951 -- # uname 00:17:18.753 16:30:38 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:18.753 16:30:38 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2655335 00:17:19.012 16:30:38 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:19.012 16:30:38 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:19.012 16:30:38 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2655335' 00:17:19.012 killing process with pid 2655335 00:17:19.012 16:30:38 rpc -- common/autotest_common.sh@965 -- # kill 2655335 00:17:19.012 16:30:38 rpc -- common/autotest_common.sh@970 -- # wait 2655335 00:17:19.270 00:17:19.270 real 0m1.869s 00:17:19.270 user 0m2.326s 00:17:19.270 sys 0m0.601s 00:17:19.270 16:30:38 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:19.270 16:30:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.270 ************************************ 00:17:19.270 END TEST rpc 00:17:19.270 ************************************ 00:17:19.270 16:30:38 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:17:19.270 16:30:38 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:19.270 16:30:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:19.270 16:30:38 -- common/autotest_common.sh@10 -- # set +x 00:17:19.270 ************************************ 00:17:19.270 START TEST skip_rpc 00:17:19.270 ************************************ 00:17:19.270 16:30:38 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:17:19.270 * Looking for test storage... 00:17:19.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:17:19.270 16:30:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:17:19.270 16:30:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:17:19.530 16:30:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:17:19.530 16:30:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:19.530 16:30:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:19.530 16:30:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.530 ************************************ 00:17:19.530 START TEST skip_rpc 00:17:19.530 ************************************ 00:17:19.530 16:30:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:17:19.530 16:30:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2655769 00:17:19.530 16:30:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:17:19.530 16:30:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:19.530 16:30:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:17:19.530 [2024-07-22 16:30:38.990634] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
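Note: the skip_rpc flow that starts here reduces to a small manual check: start the target with --no-rpc-server, then confirm an RPC client cannot talk to it (the failed rpc_cmd below). A minimal sketch, assuming a default SPDK checkout layout; the fixed sleep mirrors the script's startup wait:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                    # same fixed wait the test uses
  if ./scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC server answered" >&2
  fi
  kill "$tgt_pid"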
00:17:19.530 [2024-07-22 16:30:38.990735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655769 ] 00:17:19.530 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.530 [2024-07-22 16:30:39.062666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.530 [2024-07-22 16:30:39.152910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2655769 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 2655769 ']' 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 2655769 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2655769 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2655769' 00:17:24.797 killing process with pid 2655769 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 2655769 00:17:24.797 16:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 2655769 00:17:24.797 00:17:24.797 real 0m5.440s 00:17:24.797 user 0m5.104s 00:17:24.797 sys 0m0.342s 00:17:24.797 16:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:24.797 16:30:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.797 ************************************ 00:17:24.797 END TEST skip_rpc 
00:17:24.797 ************************************ 00:17:24.797 16:30:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:17:24.797 16:30:44 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:24.797 16:30:44 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:24.797 16:30:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.797 ************************************ 00:17:24.797 START TEST skip_rpc_with_json 00:17:24.797 ************************************ 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2656456 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2656456 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 2656456 ']' 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:24.797 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:25.057 [2024-07-22 16:30:44.483472] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
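Note: the gen_json_config step that follows is, in outline, three RPC calls: probe for a transport (expected to fail on a fresh target), create the TCP transport, then persist the runtime config for the later --json relaunch. A sketch with assumed default paths; all three methods appear verbatim in the trace below:

  ./scripts/rpc.py nvmf_get_transports --trtype tcp || true   # fails until one exists
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json         # consumed later via --json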
00:17:25.057 [2024-07-22 16:30:44.483563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2656456 ] 00:17:25.057 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.057 [2024-07-22 16:30:44.554391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.057 [2024-07-22 16:30:44.644208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:25.316 [2024-07-22 16:30:44.902380] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:17:25.316 request: 00:17:25.316 { 00:17:25.316 "trtype": "tcp", 00:17:25.316 "method": "nvmf_get_transports", 00:17:25.316 "req_id": 1 00:17:25.316 } 00:17:25.316 Got JSON-RPC error response 00:17:25.316 response: 00:17:25.316 { 00:17:25.316 "code": -19, 00:17:25.316 "message": "No such device" 00:17:25.316 } 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:25.316 [2024-07-22 16:30:44.910501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.316 16:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:17:25.574 { 00:17:25.574 "subsystems": [ 00:17:25.574 { 00:17:25.574 "subsystem": "vfio_user_target", 00:17:25.574 "config": null 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "keyring", 00:17:25.574 "config": [] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "iobuf", 00:17:25.574 "config": [ 00:17:25.574 { 00:17:25.574 "method": "iobuf_set_options", 00:17:25.574 "params": { 00:17:25.574 "small_pool_count": 8192, 00:17:25.574 "large_pool_count": 1024, 00:17:25.574 "small_bufsize": 8192, 00:17:25.574 "large_bufsize": 135168 00:17:25.574 } 00:17:25.574 } 00:17:25.574 ] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "sock", 00:17:25.574 "config": [ 00:17:25.574 { 00:17:25.574 "method": "sock_set_default_impl", 00:17:25.574 "params": { 00:17:25.574 "impl_name": "posix" 00:17:25.574 } 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "method": 
"sock_impl_set_options", 00:17:25.574 "params": { 00:17:25.574 "impl_name": "ssl", 00:17:25.574 "recv_buf_size": 4096, 00:17:25.574 "send_buf_size": 4096, 00:17:25.574 "enable_recv_pipe": true, 00:17:25.574 "enable_quickack": false, 00:17:25.574 "enable_placement_id": 0, 00:17:25.574 "enable_zerocopy_send_server": true, 00:17:25.574 "enable_zerocopy_send_client": false, 00:17:25.574 "zerocopy_threshold": 0, 00:17:25.574 "tls_version": 0, 00:17:25.574 "enable_ktls": false 00:17:25.574 } 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "method": "sock_impl_set_options", 00:17:25.574 "params": { 00:17:25.574 "impl_name": "posix", 00:17:25.574 "recv_buf_size": 2097152, 00:17:25.574 "send_buf_size": 2097152, 00:17:25.574 "enable_recv_pipe": true, 00:17:25.574 "enable_quickack": false, 00:17:25.574 "enable_placement_id": 0, 00:17:25.574 "enable_zerocopy_send_server": true, 00:17:25.574 "enable_zerocopy_send_client": false, 00:17:25.574 "zerocopy_threshold": 0, 00:17:25.574 "tls_version": 0, 00:17:25.574 "enable_ktls": false 00:17:25.574 } 00:17:25.574 } 00:17:25.574 ] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "vmd", 00:17:25.574 "config": [] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "accel", 00:17:25.574 "config": [ 00:17:25.574 { 00:17:25.574 "method": "accel_set_options", 00:17:25.574 "params": { 00:17:25.574 "small_cache_size": 128, 00:17:25.574 "large_cache_size": 16, 00:17:25.574 "task_count": 2048, 00:17:25.574 "sequence_count": 2048, 00:17:25.574 "buf_count": 2048 00:17:25.574 } 00:17:25.574 } 00:17:25.574 ] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "bdev", 00:17:25.574 "config": [ 00:17:25.574 { 00:17:25.574 "method": "bdev_set_options", 00:17:25.574 "params": { 00:17:25.574 "bdev_io_pool_size": 65535, 00:17:25.574 "bdev_io_cache_size": 256, 00:17:25.574 "bdev_auto_examine": true, 00:17:25.574 "iobuf_small_cache_size": 128, 00:17:25.574 "iobuf_large_cache_size": 16 00:17:25.574 } 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "method": "bdev_raid_set_options", 00:17:25.574 "params": { 00:17:25.574 "process_window_size_kb": 1024 00:17:25.574 } 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "method": "bdev_iscsi_set_options", 00:17:25.574 "params": { 00:17:25.574 "timeout_sec": 30 00:17:25.574 } 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "method": "bdev_nvme_set_options", 00:17:25.574 "params": { 00:17:25.574 "action_on_timeout": "none", 00:17:25.574 "timeout_us": 0, 00:17:25.574 "timeout_admin_us": 0, 00:17:25.574 "keep_alive_timeout_ms": 10000, 00:17:25.574 "arbitration_burst": 0, 00:17:25.574 "low_priority_weight": 0, 00:17:25.574 "medium_priority_weight": 0, 00:17:25.574 "high_priority_weight": 0, 00:17:25.574 "nvme_adminq_poll_period_us": 10000, 00:17:25.574 "nvme_ioq_poll_period_us": 0, 00:17:25.574 "io_queue_requests": 0, 00:17:25.574 "delay_cmd_submit": true, 00:17:25.574 "transport_retry_count": 4, 00:17:25.574 "bdev_retry_count": 3, 00:17:25.574 "transport_ack_timeout": 0, 00:17:25.574 "ctrlr_loss_timeout_sec": 0, 00:17:25.574 "reconnect_delay_sec": 0, 00:17:25.574 "fast_io_fail_timeout_sec": 0, 00:17:25.574 "disable_auto_failback": false, 00:17:25.574 "generate_uuids": false, 00:17:25.574 "transport_tos": 0, 00:17:25.574 "nvme_error_stat": false, 00:17:25.574 "rdma_srq_size": 0, 00:17:25.574 "io_path_stat": false, 00:17:25.574 "allow_accel_sequence": false, 00:17:25.574 "rdma_max_cq_size": 0, 00:17:25.574 "rdma_cm_event_timeout_ms": 0, 00:17:25.574 "dhchap_digests": [ 00:17:25.574 "sha256", 00:17:25.574 "sha384", 00:17:25.574 "sha512" 
00:17:25.574 ], 00:17:25.574 "dhchap_dhgroups": [ 00:17:25.574 "null", 00:17:25.574 "ffdhe2048", 00:17:25.574 "ffdhe3072", 00:17:25.574 "ffdhe4096", 00:17:25.574 "ffdhe6144", 00:17:25.574 "ffdhe8192" 00:17:25.574 ] 00:17:25.574 } 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "method": "bdev_nvme_set_hotplug", 00:17:25.574 "params": { 00:17:25.574 "period_us": 100000, 00:17:25.574 "enable": false 00:17:25.574 } 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "method": "bdev_wait_for_examine" 00:17:25.574 } 00:17:25.574 ] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "scsi", 00:17:25.574 "config": null 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "scheduler", 00:17:25.574 "config": [ 00:17:25.574 { 00:17:25.574 "method": "framework_set_scheduler", 00:17:25.574 "params": { 00:17:25.574 "name": "static" 00:17:25.574 } 00:17:25.574 } 00:17:25.574 ] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "vhost_scsi", 00:17:25.574 "config": [] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "vhost_blk", 00:17:25.574 "config": [] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "ublk", 00:17:25.574 "config": [] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "nbd", 00:17:25.574 "config": [] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "nvmf", 00:17:25.574 "config": [ 00:17:25.574 { 00:17:25.574 "method": "nvmf_set_config", 00:17:25.574 "params": { 00:17:25.574 "discovery_filter": "match_any", 00:17:25.574 "admin_cmd_passthru": { 00:17:25.574 "identify_ctrlr": false 00:17:25.574 } 00:17:25.574 } 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "method": "nvmf_set_max_subsystems", 00:17:25.574 "params": { 00:17:25.574 "max_subsystems": 1024 00:17:25.574 } 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "method": "nvmf_set_crdt", 00:17:25.574 "params": { 00:17:25.574 "crdt1": 0, 00:17:25.574 "crdt2": 0, 00:17:25.574 "crdt3": 0 00:17:25.574 } 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "method": "nvmf_create_transport", 00:17:25.574 "params": { 00:17:25.574 "trtype": "TCP", 00:17:25.574 "max_queue_depth": 128, 00:17:25.574 "max_io_qpairs_per_ctrlr": 127, 00:17:25.574 "in_capsule_data_size": 4096, 00:17:25.574 "max_io_size": 131072, 00:17:25.574 "io_unit_size": 131072, 00:17:25.574 "max_aq_depth": 128, 00:17:25.574 "num_shared_buffers": 511, 00:17:25.574 "buf_cache_size": 4294967295, 00:17:25.574 "dif_insert_or_strip": false, 00:17:25.574 "zcopy": false, 00:17:25.574 "c2h_success": true, 00:17:25.574 "sock_priority": 0, 00:17:25.574 "abort_timeout_sec": 1, 00:17:25.574 "ack_timeout": 0, 00:17:25.574 "data_wr_pool_size": 0 00:17:25.574 } 00:17:25.574 } 00:17:25.574 ] 00:17:25.574 }, 00:17:25.574 { 00:17:25.574 "subsystem": "iscsi", 00:17:25.574 "config": [ 00:17:25.574 { 00:17:25.574 "method": "iscsi_set_options", 00:17:25.574 "params": { 00:17:25.574 "node_base": "iqn.2016-06.io.spdk", 00:17:25.574 "max_sessions": 128, 00:17:25.574 "max_connections_per_session": 2, 00:17:25.574 "max_queue_depth": 64, 00:17:25.574 "default_time2wait": 2, 00:17:25.574 "default_time2retain": 20, 00:17:25.574 "first_burst_length": 8192, 00:17:25.574 "immediate_data": true, 00:17:25.574 "allow_duplicated_isid": false, 00:17:25.574 "error_recovery_level": 0, 00:17:25.574 "nop_timeout": 60, 00:17:25.574 "nop_in_interval": 30, 00:17:25.574 "disable_chap": false, 00:17:25.574 "require_chap": false, 00:17:25.574 "mutual_chap": false, 00:17:25.574 "chap_group": 0, 00:17:25.574 "max_large_datain_per_connection": 64, 00:17:25.574 "max_r2t_per_connection": 4, 00:17:25.574 
"pdu_pool_size": 36864, 00:17:25.574 "immediate_data_pool_size": 16384, 00:17:25.574 "data_out_pool_size": 2048 00:17:25.574 } 00:17:25.574 } 00:17:25.574 ] 00:17:25.574 } 00:17:25.574 ] 00:17:25.574 } 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2656456 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2656456 ']' 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2656456 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2656456 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2656456' 00:17:25.574 killing process with pid 2656456 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2656456 00:17:25.574 16:30:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2656456 00:17:26.141 16:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2656578 00:17:26.142 16:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:17:26.142 16:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2656578 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2656578 ']' 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2656578 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2656578 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2656578' 00:17:31.407 killing process with pid 2656578 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2656578 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2656578 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:17:31.407 00:17:31.407 real 
0m6.504s 00:17:31.407 user 0m6.075s 00:17:31.407 sys 0m0.705s 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:31.407 ************************************ 00:17:31.407 END TEST skip_rpc_with_json 00:17:31.407 ************************************ 00:17:31.407 16:30:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:17:31.407 16:30:50 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:31.407 16:30:50 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:31.407 16:30:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.407 ************************************ 00:17:31.407 START TEST skip_rpc_with_delay 00:17:31.407 ************************************ 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:17:31.407 16:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:31.407 [2024-07-22 16:30:51.030885] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
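Note: that *ERROR* line is the whole point of skip_rpc_with_delay: --wait-for-rpc is meaningless when no RPC server will be started, so the target must refuse the combination and exit non-zero rather than hang. A minimal sketch, paths assumed:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo "exit code: $?"                       # expected non-zero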
00:17:31.407 [2024-07-22 16:30:51.031019] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:17:31.407 16:30:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:17:31.407 16:30:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:31.407 16:30:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:31.407 16:30:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:31.407 00:17:31.407 real 0m0.066s 00:17:31.407 user 0m0.040s 00:17:31.407 sys 0m0.025s 00:17:31.407 16:30:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:31.407 16:30:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:17:31.407 ************************************ 00:17:31.407 END TEST skip_rpc_with_delay 00:17:31.407 ************************************ 00:17:31.666 16:30:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:17:31.666 16:30:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:17:31.666 16:30:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:17:31.666 16:30:51 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:31.666 16:30:51 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:31.666 16:30:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.666 ************************************ 00:17:31.666 START TEST exit_on_failed_rpc_init 00:17:31.666 ************************************ 00:17:31.666 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:17:31.666 16:30:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2657314 00:17:31.666 16:30:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:17:31.666 16:30:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2657314 00:17:31.666 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 2657314 ']' 00:17:31.666 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.666 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:31.666 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.666 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:31.666 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.666 [2024-07-22 16:30:51.146306] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
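Note: exit_on_failed_rpc_init sets up the failure seen below: two targets contend for the same default RPC socket, and the second must fail initialization and exit non-zero instead of waiting. A sketch under assumed default paths and socket:

  ./build/bin/spdk_tgt -m 0x1 &              # claims /var/tmp/spdk.sock
  sleep 2
  ./build/bin/spdk_tgt -m 0x2                # expected: "socket ... in use", non-zero exit
  echo "second instance exit code: $?"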
00:17:31.666 [2024-07-22 16:30:51.146412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657314 ] 00:17:31.666 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.666 [2024-07-22 16:30:51.212191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.666 [2024-07-22 16:30:51.300174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:17:31.925 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:17:32.183 [2024-07-22 16:30:51.610432] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:32.183 [2024-07-22 16:30:51.610503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657319 ] 00:17:32.183 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.183 [2024-07-22 16:30:51.681931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.183 [2024-07-22 16:30:51.776762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.183 [2024-07-22 16:30:51.776902] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
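Note: the "Specify another" hint above refers to the -r option: a second target can coexist by binding its own RPC socket, which clients then select with rpc.py -s. The socket path here is an illustrative assumption:

  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
  ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version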
00:17:32.183 [2024-07-22 16:30:51.776924] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:32.183 [2024-07-22 16:30:51.776938] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2657314 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 2657314 ']' 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 2657314 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2657314 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2657314' 00:17:32.441 killing process with pid 2657314 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 2657314 00:17:32.441 16:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 2657314 00:17:32.700 00:17:32.700 real 0m1.206s 00:17:32.700 user 0m1.316s 00:17:32.700 sys 0m0.462s 00:17:32.700 16:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:32.700 16:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.700 ************************************ 00:17:32.700 END TEST exit_on_failed_rpc_init 00:17:32.700 ************************************ 00:17:32.700 16:30:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:17:32.700 00:17:32.700 real 0m13.456s 00:17:32.700 user 0m12.621s 00:17:32.700 sys 0m1.706s 00:17:32.700 16:30:52 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:32.700 16:30:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.700 ************************************ 00:17:32.700 END TEST skip_rpc 00:17:32.700 ************************************ 00:17:32.700 16:30:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:17:32.700 16:30:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:32.700 16:30:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:32.700 16:30:52 -- 
common/autotest_common.sh@10 -- # set +x 00:17:32.959 ************************************ 00:17:32.959 START TEST rpc_client 00:17:32.959 ************************************ 00:17:32.959 16:30:52 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:17:32.959 * Looking for test storage... 00:17:32.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:17:32.959 16:30:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:17:32.959 OK 00:17:32.959 16:30:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:17:32.959 00:17:32.959 real 0m0.068s 00:17:32.959 user 0m0.033s 00:17:32.959 sys 0m0.040s 00:17:32.959 16:30:52 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:32.959 16:30:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:17:32.959 ************************************ 00:17:32.959 END TEST rpc_client 00:17:32.959 ************************************ 00:17:32.959 16:30:52 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:17:32.959 16:30:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:32.959 16:30:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:32.959 16:30:52 -- common/autotest_common.sh@10 -- # set +x 00:17:32.959 ************************************ 00:17:32.959 START TEST json_config 00:17:32.959 ************************************ 00:17:32.959 16:30:52 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:17:32.959 16:30:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.959 16:30:52 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.959 16:30:52 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.959 16:30:52 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.959 16:30:52 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.959 16:30:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.959 16:30:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.960 16:30:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.960 16:30:52 json_config -- paths/export.sh@5 -- # export PATH 00:17:32.960 16:30:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.960 16:30:52 json_config -- nvmf/common.sh@47 -- # : 0 00:17:32.960 16:30:52 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.960 16:30:52 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.960 16:30:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.960 16:30:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.960 16:30:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.960 16:30:52 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.960 16:30:52 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.960 16:30:52 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:17:32.960 INFO: JSON configuration test init 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:17:32.960 16:30:52 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:32.960 16:30:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:17:32.960 16:30:52 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:32.960 16:30:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:32.960 16:30:52 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:17:32.960 16:30:52 json_config -- json_config/common.sh@9 -- # local app=target 00:17:32.960 16:30:52 json_config -- json_config/common.sh@10 -- # shift 00:17:32.960 16:30:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:32.960 16:30:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:32.960 16:30:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:17:32.960 16:30:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:32.960 16:30:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:32.960 16:30:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2657563 00:17:32.960 16:30:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:17:32.960 16:30:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:32.960 Waiting for target to run... 
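For reference, the launch-and-wait pattern traced above can be replayed by hand roughly as below; the polling loop is a simplified stand-in for the harness's waitforlisten helper, and rpc_get_methods is used here only as a cheap liveness probe:

    # Start the target on core 0 with a 1024 MB pool, deferring subsystem init
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    tgt_pid=$!
    # Wait until the RPC socket answers before issuing configuration RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done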
00:17:32.960 16:30:52 json_config -- json_config/common.sh@25 -- # waitforlisten 2657563 /var/tmp/spdk_tgt.sock 00:17:32.960 16:30:52 json_config -- common/autotest_common.sh@827 -- # '[' -z 2657563 ']' 00:17:32.960 16:30:52 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:32.960 16:30:52 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:32.960 16:30:52 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:32.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:32.960 16:30:52 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:32.960 16:30:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:32.960 [2024-07-22 16:30:52.594724] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:32.960 [2024-07-22 16:30:52.594821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657563 ] 00:17:33.219 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.476 [2024-07-22 16:30:52.943234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.476 [2024-07-22 16:30:53.006359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.041 16:30:53 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:34.041 16:30:53 json_config -- common/autotest_common.sh@860 -- # return 0 00:17:34.041 16:30:53 json_config -- json_config/common.sh@26 -- # echo '' 00:17:34.041 00:17:34.041 16:30:53 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:17:34.041 16:30:53 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:17:34.041 16:30:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:34.041 16:30:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:34.041 16:30:53 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:17:34.041 16:30:53 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:17:34.041 16:30:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.041 16:30:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:34.041 16:30:53 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:17:34.041 16:30:53 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:17:34.042 16:30:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:17:37.326 16:30:56 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:37.326 16:30:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:17:37.326 16:30:56 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:17:37.326 16:30:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@48 -- # local get_types 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:17:37.326 16:30:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.326 16:30:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@55 -- # return 0 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:17:37.326 16:30:56 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:37.326 16:30:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:17:37.326 16:30:56 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:17:37.326 16:30:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:17:37.584 MallocForNvmf0 00:17:37.584 16:30:57 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:17:37.584 16:30:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:17:37.842 MallocForNvmf1 00:17:37.842 16:30:57 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:17:37.842 16:30:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:17:38.100 [2024-07-22 16:30:57.668798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.100 16:30:57 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.100 16:30:57 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.357 16:30:57 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:17:38.357 16:30:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:17:38.615 16:30:58 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:17:38.615 16:30:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:17:38.873 16:30:58 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:17:38.873 16:30:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:17:39.144 [2024-07-22 16:30:58.640087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:17:39.144 16:30:58 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:17:39.144 16:30:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.144 16:30:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:39.144 16:30:58 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:17:39.144 16:30:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.144 16:30:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:39.144 16:30:58 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:17:39.144 16:30:58 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:17:39.144 16:30:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:17:39.402 MallocBdevForConfigChangeCheck 00:17:39.402 16:30:58 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:17:39.402 16:30:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.402 16:30:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:39.402 16:30:58 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:17:39.402 16:30:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:39.967 16:30:59 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:17:39.967 INFO: shutting down applications... 
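Collected in one place, the NVMe/TCP configuration this test builds is the RPC sequence recorded above, with socket path, sizes, and NQN exactly as logged:

    RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420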
00:17:39.967 16:30:59 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:17:39.967 16:30:59 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:17:39.967 16:30:59 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:17:39.967 16:30:59 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:17:42.613 Calling clear_iscsi_subsystem 00:17:42.613 Calling clear_nvmf_subsystem 00:17:42.613 Calling clear_nbd_subsystem 00:17:42.613 Calling clear_ublk_subsystem 00:17:42.613 Calling clear_vhost_blk_subsystem 00:17:42.613 Calling clear_vhost_scsi_subsystem 00:17:42.613 Calling clear_bdev_subsystem 00:17:42.613 16:31:01 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:17:42.613 16:31:01 json_config -- json_config/json_config.sh@343 -- # count=100 00:17:42.613 16:31:01 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:17:42.613 16:31:01 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:42.613 16:31:01 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:17:42.613 16:31:01 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:17:42.896 16:31:02 json_config -- json_config/json_config.sh@345 -- # break 00:17:42.896 16:31:02 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:17:42.896 16:31:02 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:17:42.896 16:31:02 json_config -- json_config/common.sh@31 -- # local app=target 00:17:42.896 16:31:02 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:17:42.896 16:31:02 json_config -- json_config/common.sh@35 -- # [[ -n 2657563 ]] 00:17:42.896 16:31:02 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2657563 00:17:42.896 16:31:02 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:17:42.896 16:31:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:42.896 16:31:02 json_config -- json_config/common.sh@41 -- # kill -0 2657563 00:17:42.896 16:31:02 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:17:43.181 16:31:02 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:17:43.181 16:31:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:43.181 16:31:02 json_config -- json_config/common.sh@41 -- # kill -0 2657563 00:17:43.181 16:31:02 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:17:43.181 16:31:02 json_config -- json_config/common.sh@43 -- # break 00:17:43.181 16:31:02 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:17:43.181 16:31:02 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:17:43.181 SPDK target shutdown done 00:17:43.181 16:31:02 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:17:43.181 INFO: relaunching applications... 
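The relaunch announced here restarts the same binary from the configuration saved a moment earlier; done by hand, the round-trip is roughly as follows, where tgt_pid stands in for the PID captured at launch:

    # Persist the live configuration, stop the target, start again from the file
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    kill -SIGINT "$tgt_pid"; wait "$tgt_pid" 2>/dev/null
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &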
00:17:43.181 16:31:02 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:17:43.181 16:31:02 json_config -- json_config/common.sh@9 -- # local app=target 00:17:43.181 16:31:02 json_config -- json_config/common.sh@10 -- # shift 00:17:43.181 16:31:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:43.181 16:31:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:43.181 16:31:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:17:43.181 16:31:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:43.181 16:31:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:43.181 16:31:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2658906 00:17:43.181 16:31:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:17:43.181 16:31:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:43.181 Waiting for target to run... 00:17:43.181 16:31:02 json_config -- json_config/common.sh@25 -- # waitforlisten 2658906 /var/tmp/spdk_tgt.sock 00:17:43.181 16:31:02 json_config -- common/autotest_common.sh@827 -- # '[' -z 2658906 ']' 00:17:43.181 16:31:02 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:43.181 16:31:02 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:43.181 16:31:02 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:43.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:43.181 16:31:02 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:43.181 16:31:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:43.181 [2024-07-22 16:31:02.822977] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:43.181 [2024-07-22 16:31:02.823080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2658906 ] 00:17:43.499 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.774 [2024-07-22 16:31:03.349699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.040 [2024-07-22 16:31:03.431583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.502 [2024-07-22 16:31:06.470067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.503 [2024-07-22 16:31:06.502598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:17:47.780 16:31:07 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:47.780 16:31:07 json_config -- common/autotest_common.sh@860 -- # return 0 00:17:47.780 16:31:07 json_config -- json_config/common.sh@26 -- # echo '' 00:17:47.780 00:17:47.780 16:31:07 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:17:47.780 16:31:07 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:17:47.780 INFO: Checking if target configuration is the same... 
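The check announced here, and the change-detection pass that follows it, both reduce to normalizing two configurations and diffing them. A hand-run sketch (the /tmp filenames are placeholders; the harness itself uses mktemp):

    # Phase 1: live config vs the file it was launched from; these must match
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/file.json
    diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'
    # Phase 2: delete the sentinel bdev and re-run the same comparison;
    # diff now exiting non-zero (ret=1) is what counts as a detected change
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck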
00:17:47.780 16:31:07 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:17:47.780 16:31:07 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:17:47.780 16:31:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:47.780 + '[' 2 -ne 2 ']' 00:17:47.780 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:17:47.780 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:17:47.781 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:17:47.781 +++ basename /dev/fd/62 00:17:47.781 ++ mktemp /tmp/62.XXX 00:17:47.781 + tmp_file_1=/tmp/62.nBK 00:17:47.781 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:17:47.781 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:17:47.781 + tmp_file_2=/tmp/spdk_tgt_config.json.6wX 00:17:47.781 + ret=0 00:17:47.781 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:17:48.077 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:17:48.077 + diff -u /tmp/62.nBK /tmp/spdk_tgt_config.json.6wX 00:17:48.077 + echo 'INFO: JSON config files are the same' 00:17:48.077 INFO: JSON config files are the same 00:17:48.077 + rm /tmp/62.nBK /tmp/spdk_tgt_config.json.6wX 00:17:48.077 + exit 0 00:17:48.077 16:31:07 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:17:48.077 16:31:07 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:17:48.077 INFO: changing configuration and checking if this can be detected... 00:17:48.077 16:31:07 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:17:48.077 16:31:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:17:48.339 16:31:07 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:17:48.339 16:31:07 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:17:48.339 16:31:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:48.339 + '[' 2 -ne 2 ']' 00:17:48.339 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:17:48.339 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:17:48.339 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:17:48.339 +++ basename /dev/fd/62 00:17:48.339 ++ mktemp /tmp/62.XXX 00:17:48.339 + tmp_file_1=/tmp/62.onu 00:17:48.339 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:17:48.339 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:17:48.339 + tmp_file_2=/tmp/spdk_tgt_config.json.i94 00:17:48.339 + ret=0 00:17:48.339 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:17:48.905 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:17:48.905 + diff -u /tmp/62.onu /tmp/spdk_tgt_config.json.i94 00:17:48.905 + ret=1 00:17:48.905 + echo '=== Start of file: /tmp/62.onu ===' 00:17:48.905 + cat /tmp/62.onu 00:17:48.905 + echo '=== End of file: /tmp/62.onu ===' 00:17:48.905 + echo '' 00:17:48.905 + echo '=== Start of file: /tmp/spdk_tgt_config.json.i94 ===' 00:17:48.905 + cat /tmp/spdk_tgt_config.json.i94 00:17:48.905 + echo '=== End of file: /tmp/spdk_tgt_config.json.i94 ===' 00:17:48.905 + echo '' 00:17:48.905 + rm /tmp/62.onu /tmp/spdk_tgt_config.json.i94 00:17:48.905 + exit 1 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:17:48.905 INFO: configuration change detected. 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:17:48.905 16:31:08 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:48.905 16:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@317 -- # [[ -n 2658906 ]] 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:17:48.905 16:31:08 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:48.905 16:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@193 -- # uname -s 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:17:48.905 16:31:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.905 16:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:48.905 16:31:08 json_config -- json_config/json_config.sh@323 -- # killprocess 2658906 00:17:48.905 16:31:08 json_config -- common/autotest_common.sh@946 -- # '[' -z 2658906 ']' 00:17:48.905 16:31:08 json_config -- common/autotest_common.sh@950 -- # kill -0 2658906 00:17:48.906 16:31:08 json_config -- common/autotest_common.sh@951 -- # uname 00:17:48.906 16:31:08 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:48.906 16:31:08 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2658906 00:17:48.906 16:31:08 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:48.906 16:31:08 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:48.906 16:31:08 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2658906' 00:17:48.906 killing process with pid 2658906 00:17:48.906 16:31:08 json_config -- common/autotest_common.sh@965 -- # kill 2658906 00:17:48.906 16:31:08 json_config -- common/autotest_common.sh@970 -- # wait 2658906 00:17:51.434 16:31:10 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:17:51.434 16:31:10 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:17:51.434 16:31:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.434 16:31:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:51.434 16:31:10 json_config -- json_config/json_config.sh@328 -- # return 0 00:17:51.434 16:31:10 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:17:51.434 INFO: Success 00:17:51.434 00:17:51.434 real 0m18.487s 00:17:51.434 user 0m20.374s 00:17:51.434 sys 0m2.042s 00:17:51.434 16:31:10 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:51.434 16:31:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:51.434 ************************************ 00:17:51.434 END TEST json_config 00:17:51.434 ************************************ 00:17:51.434 16:31:10 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:17:51.434 16:31:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:51.434 16:31:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:51.434 16:31:10 -- common/autotest_common.sh@10 -- # set +x 00:17:51.434 ************************************ 00:17:51.434 START TEST json_config_extra_key 00:17:51.434 ************************************ 00:17:51.435 16:31:11 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.435 16:31:11 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.435 16:31:11 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.435 16:31:11 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.435 16:31:11 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.435 16:31:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.435 16:31:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.435 16:31:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.435 16:31:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:17:51.435 16:31:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.435 16:31:11 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.435 16:31:11 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:17:51.435 INFO: launching applications... 00:17:51.435 16:31:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:17:51.435 16:31:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:17:51.435 16:31:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:17:51.435 16:31:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:51.435 16:31:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:51.435 16:31:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:17:51.435 16:31:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:51.435 16:31:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:51.435 16:31:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2659983 00:17:51.435 16:31:11 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:17:51.435 16:31:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:51.435 Waiting for target to run... 
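Unlike the earlier json_config run, the target here is launched directly from a static JSON file, so no --wait-for-rpc is needed; the traced command reduces to:

    # Subsystems are initialized from extra_key.json during startup
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &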
00:17:51.435 16:31:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2659983 /var/tmp/spdk_tgt.sock 00:17:51.435 16:31:11 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 2659983 ']' 00:17:51.435 16:31:11 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:51.435 16:31:11 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:51.435 16:31:11 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:51.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:51.435 16:31:11 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:51.435 16:31:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:51.693 [2024-07-22 16:31:11.127109] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:51.694 [2024-07-22 16:31:11.127200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659983 ] 00:17:51.694 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.952 [2024-07-22 16:31:11.500215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.952 [2024-07-22 16:31:11.563634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.518 16:31:12 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:52.518 16:31:12 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:17:52.518 16:31:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:17:52.518 00:17:52.518 16:31:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:17:52.518 INFO: shutting down applications... 
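The shutdown that follows is the loop from json_config/common.sh seen in the trace: SIGINT the target, then poll up to 30 times at 0.5 s intervals for the PID to disappear. In isolation:

    kill -SIGINT "$tgt_pid"
    for i in $(seq 1 30); do
        kill -0 "$tgt_pid" 2>/dev/null || break
        sleep 0.5
    done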
00:17:52.518 16:31:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:17:52.518 16:31:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:17:52.518 16:31:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:17:52.518 16:31:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2659983 ]] 00:17:52.518 16:31:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2659983 00:17:52.518 16:31:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:17:52.518 16:31:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:52.518 16:31:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2659983 00:17:52.518 16:31:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:53.083 16:31:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:53.083 16:31:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:53.083 16:31:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2659983 00:17:53.083 16:31:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:17:53.083 16:31:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:17:53.083 16:31:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:17:53.083 16:31:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:17:53.083 SPDK target shutdown done 00:17:53.083 16:31:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:17:53.083 Success 00:17:53.083 00:17:53.083 real 0m1.590s 00:17:53.083 user 0m1.540s 00:17:53.083 sys 0m0.460s 00:17:53.083 16:31:12 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:53.083 16:31:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:53.083 ************************************ 00:17:53.083 END TEST json_config_extra_key 00:17:53.083 ************************************ 00:17:53.083 16:31:12 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:53.083 16:31:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:53.083 16:31:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:53.083 16:31:12 -- common/autotest_common.sh@10 -- # set +x 00:17:53.083 ************************************ 00:17:53.083 START TEST alias_rpc 00:17:53.084 ************************************ 00:17:53.084 16:31:12 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:53.084 * Looking for test storage... 
00:17:53.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:17:53.084 16:31:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:53.084 16:31:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2660286 00:17:53.084 16:31:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:53.084 16:31:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2660286 00:17:53.084 16:31:12 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 2660286 ']' 00:17:53.084 16:31:12 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.084 16:31:12 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:53.084 16:31:12 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.084 16:31:12 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:53.084 16:31:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.342 [2024-07-22 16:31:12.766078] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:53.342 [2024-07-22 16:31:12.766157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660286 ] 00:17:53.342 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.342 [2024-07-22 16:31:12.831734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.342 [2024-07-22 16:31:12.915406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.599 16:31:13 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:53.599 16:31:13 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:17:53.599 16:31:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:17:53.856 16:31:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2660286 00:17:53.856 16:31:13 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 2660286 ']' 00:17:53.856 16:31:13 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 2660286 00:17:53.856 16:31:13 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:17:53.856 16:31:13 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:53.856 16:31:13 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2660286 00:17:53.856 16:31:13 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:53.856 16:31:13 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:53.856 16:31:13 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2660286' 00:17:53.856 killing process with pid 2660286 00:17:53.856 16:31:13 alias_rpc -- common/autotest_common.sh@965 -- # kill 2660286 00:17:53.856 16:31:13 alias_rpc -- common/autotest_common.sh@970 -- # wait 2660286 00:17:54.422 00:17:54.422 real 0m1.203s 00:17:54.422 user 0m1.275s 00:17:54.422 sys 0m0.425s 00:17:54.422 16:31:13 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:54.422 16:31:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.422 
************************************ 00:17:54.422 END TEST alias_rpc 00:17:54.422 ************************************ 00:17:54.422 16:31:13 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:17:54.422 16:31:13 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:17:54.422 16:31:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:54.422 16:31:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:54.422 16:31:13 -- common/autotest_common.sh@10 -- # set +x 00:17:54.422 ************************************ 00:17:54.422 START TEST spdkcli_tcp 00:17:54.422 ************************************ 00:17:54.422 16:31:13 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:17:54.422 * Looking for test storage... 00:17:54.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:17:54.422 16:31:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:17:54.422 16:31:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:17:54.422 16:31:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:17:54.422 16:31:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:17:54.422 16:31:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:17:54.422 16:31:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.422 16:31:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:17:54.422 16:31:13 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:54.422 16:31:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:54.422 16:31:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2660479 00:17:54.422 16:31:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:54.422 16:31:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2660479 00:17:54.422 16:31:13 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 2660479 ']' 00:17:54.422 16:31:13 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.422 16:31:13 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:54.422 16:31:13 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.422 16:31:13 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:54.422 16:31:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:54.422 [2024-07-22 16:31:14.020387] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:17:54.422 [2024-07-22 16:31:14.020465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660479 ] 00:17:54.422 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.680 [2024-07-22 16:31:14.086430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:54.680 [2024-07-22 16:31:14.170735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.680 [2024-07-22 16:31:14.170739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.938 16:31:14 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:54.938 16:31:14 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:17:54.938 16:31:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2660487 00:17:54.938 16:31:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:17:54.938 16:31:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:17:55.197 [ 00:17:55.197 "bdev_malloc_delete", 00:17:55.197 "bdev_malloc_create", 00:17:55.197 "bdev_null_resize", 00:17:55.197 "bdev_null_delete", 00:17:55.197 "bdev_null_create", 00:17:55.197 "bdev_nvme_cuse_unregister", 00:17:55.197 "bdev_nvme_cuse_register", 00:17:55.197 "bdev_opal_new_user", 00:17:55.197 "bdev_opal_set_lock_state", 00:17:55.197 "bdev_opal_delete", 00:17:55.197 "bdev_opal_get_info", 00:17:55.197 "bdev_opal_create", 00:17:55.197 "bdev_nvme_opal_revert", 00:17:55.197 "bdev_nvme_opal_init", 00:17:55.197 "bdev_nvme_send_cmd", 00:17:55.197 "bdev_nvme_get_path_iostat", 00:17:55.197 "bdev_nvme_get_mdns_discovery_info", 00:17:55.197 "bdev_nvme_stop_mdns_discovery", 00:17:55.197 "bdev_nvme_start_mdns_discovery", 00:17:55.197 "bdev_nvme_set_multipath_policy", 00:17:55.197 "bdev_nvme_set_preferred_path", 00:17:55.197 "bdev_nvme_get_io_paths", 00:17:55.197 "bdev_nvme_remove_error_injection", 00:17:55.197 "bdev_nvme_add_error_injection", 00:17:55.197 "bdev_nvme_get_discovery_info", 00:17:55.197 "bdev_nvme_stop_discovery", 00:17:55.197 "bdev_nvme_start_discovery", 00:17:55.197 "bdev_nvme_get_controller_health_info", 00:17:55.197 "bdev_nvme_disable_controller", 00:17:55.197 "bdev_nvme_enable_controller", 00:17:55.197 "bdev_nvme_reset_controller", 00:17:55.197 "bdev_nvme_get_transport_statistics", 00:17:55.197 "bdev_nvme_apply_firmware", 00:17:55.197 "bdev_nvme_detach_controller", 00:17:55.197 "bdev_nvme_get_controllers", 00:17:55.197 "bdev_nvme_attach_controller", 00:17:55.197 "bdev_nvme_set_hotplug", 00:17:55.197 "bdev_nvme_set_options", 00:17:55.197 "bdev_passthru_delete", 00:17:55.197 "bdev_passthru_create", 00:17:55.197 "bdev_lvol_set_parent_bdev", 00:17:55.197 "bdev_lvol_set_parent", 00:17:55.197 "bdev_lvol_check_shallow_copy", 00:17:55.197 "bdev_lvol_start_shallow_copy", 00:17:55.197 "bdev_lvol_grow_lvstore", 00:17:55.197 "bdev_lvol_get_lvols", 00:17:55.197 "bdev_lvol_get_lvstores", 00:17:55.197 "bdev_lvol_delete", 00:17:55.197 "bdev_lvol_set_read_only", 00:17:55.197 "bdev_lvol_resize", 00:17:55.197 "bdev_lvol_decouple_parent", 00:17:55.197 "bdev_lvol_inflate", 00:17:55.197 "bdev_lvol_rename", 00:17:55.197 "bdev_lvol_clone_bdev", 00:17:55.197 "bdev_lvol_clone", 00:17:55.197 "bdev_lvol_snapshot", 00:17:55.197 "bdev_lvol_create", 00:17:55.197 "bdev_lvol_delete_lvstore", 00:17:55.197 "bdev_lvol_rename_lvstore", 
00:17:55.197 "bdev_lvol_create_lvstore", 00:17:55.197 "bdev_raid_set_options", 00:17:55.197 "bdev_raid_remove_base_bdev", 00:17:55.197 "bdev_raid_add_base_bdev", 00:17:55.197 "bdev_raid_delete", 00:17:55.197 "bdev_raid_create", 00:17:55.197 "bdev_raid_get_bdevs", 00:17:55.197 "bdev_error_inject_error", 00:17:55.197 "bdev_error_delete", 00:17:55.197 "bdev_error_create", 00:17:55.197 "bdev_split_delete", 00:17:55.197 "bdev_split_create", 00:17:55.197 "bdev_delay_delete", 00:17:55.197 "bdev_delay_create", 00:17:55.197 "bdev_delay_update_latency", 00:17:55.197 "bdev_zone_block_delete", 00:17:55.197 "bdev_zone_block_create", 00:17:55.197 "blobfs_create", 00:17:55.197 "blobfs_detect", 00:17:55.197 "blobfs_set_cache_size", 00:17:55.197 "bdev_aio_delete", 00:17:55.197 "bdev_aio_rescan", 00:17:55.197 "bdev_aio_create", 00:17:55.197 "bdev_ftl_set_property", 00:17:55.197 "bdev_ftl_get_properties", 00:17:55.197 "bdev_ftl_get_stats", 00:17:55.197 "bdev_ftl_unmap", 00:17:55.197 "bdev_ftl_unload", 00:17:55.197 "bdev_ftl_delete", 00:17:55.197 "bdev_ftl_load", 00:17:55.197 "bdev_ftl_create", 00:17:55.197 "bdev_virtio_attach_controller", 00:17:55.197 "bdev_virtio_scsi_get_devices", 00:17:55.197 "bdev_virtio_detach_controller", 00:17:55.197 "bdev_virtio_blk_set_hotplug", 00:17:55.197 "bdev_iscsi_delete", 00:17:55.197 "bdev_iscsi_create", 00:17:55.197 "bdev_iscsi_set_options", 00:17:55.197 "accel_error_inject_error", 00:17:55.197 "ioat_scan_accel_module", 00:17:55.197 "dsa_scan_accel_module", 00:17:55.197 "iaa_scan_accel_module", 00:17:55.197 "vfu_virtio_create_scsi_endpoint", 00:17:55.197 "vfu_virtio_scsi_remove_target", 00:17:55.197 "vfu_virtio_scsi_add_target", 00:17:55.197 "vfu_virtio_create_blk_endpoint", 00:17:55.197 "vfu_virtio_delete_endpoint", 00:17:55.197 "keyring_file_remove_key", 00:17:55.197 "keyring_file_add_key", 00:17:55.197 "keyring_linux_set_options", 00:17:55.197 "iscsi_get_histogram", 00:17:55.197 "iscsi_enable_histogram", 00:17:55.197 "iscsi_set_options", 00:17:55.197 "iscsi_get_auth_groups", 00:17:55.197 "iscsi_auth_group_remove_secret", 00:17:55.197 "iscsi_auth_group_add_secret", 00:17:55.197 "iscsi_delete_auth_group", 00:17:55.197 "iscsi_create_auth_group", 00:17:55.197 "iscsi_set_discovery_auth", 00:17:55.197 "iscsi_get_options", 00:17:55.197 "iscsi_target_node_request_logout", 00:17:55.197 "iscsi_target_node_set_redirect", 00:17:55.197 "iscsi_target_node_set_auth", 00:17:55.197 "iscsi_target_node_add_lun", 00:17:55.197 "iscsi_get_stats", 00:17:55.197 "iscsi_get_connections", 00:17:55.197 "iscsi_portal_group_set_auth", 00:17:55.197 "iscsi_start_portal_group", 00:17:55.197 "iscsi_delete_portal_group", 00:17:55.197 "iscsi_create_portal_group", 00:17:55.197 "iscsi_get_portal_groups", 00:17:55.197 "iscsi_delete_target_node", 00:17:55.197 "iscsi_target_node_remove_pg_ig_maps", 00:17:55.197 "iscsi_target_node_add_pg_ig_maps", 00:17:55.197 "iscsi_create_target_node", 00:17:55.197 "iscsi_get_target_nodes", 00:17:55.197 "iscsi_delete_initiator_group", 00:17:55.197 "iscsi_initiator_group_remove_initiators", 00:17:55.197 "iscsi_initiator_group_add_initiators", 00:17:55.197 "iscsi_create_initiator_group", 00:17:55.197 "iscsi_get_initiator_groups", 00:17:55.197 "nvmf_set_crdt", 00:17:55.197 "nvmf_set_config", 00:17:55.197 "nvmf_set_max_subsystems", 00:17:55.197 "nvmf_stop_mdns_prr", 00:17:55.197 "nvmf_publish_mdns_prr", 00:17:55.197 "nvmf_subsystem_get_listeners", 00:17:55.197 "nvmf_subsystem_get_qpairs", 00:17:55.197 "nvmf_subsystem_get_controllers", 00:17:55.197 "nvmf_get_stats", 00:17:55.197 
"nvmf_get_transports", 00:17:55.197 "nvmf_create_transport", 00:17:55.197 "nvmf_get_targets", 00:17:55.197 "nvmf_delete_target", 00:17:55.197 "nvmf_create_target", 00:17:55.197 "nvmf_subsystem_allow_any_host", 00:17:55.197 "nvmf_subsystem_remove_host", 00:17:55.197 "nvmf_subsystem_add_host", 00:17:55.197 "nvmf_ns_remove_host", 00:17:55.197 "nvmf_ns_add_host", 00:17:55.197 "nvmf_subsystem_remove_ns", 00:17:55.197 "nvmf_subsystem_add_ns", 00:17:55.197 "nvmf_subsystem_listener_set_ana_state", 00:17:55.197 "nvmf_discovery_get_referrals", 00:17:55.197 "nvmf_discovery_remove_referral", 00:17:55.197 "nvmf_discovery_add_referral", 00:17:55.197 "nvmf_subsystem_remove_listener", 00:17:55.197 "nvmf_subsystem_add_listener", 00:17:55.197 "nvmf_delete_subsystem", 00:17:55.197 "nvmf_create_subsystem", 00:17:55.197 "nvmf_get_subsystems", 00:17:55.197 "env_dpdk_get_mem_stats", 00:17:55.197 "nbd_get_disks", 00:17:55.197 "nbd_stop_disk", 00:17:55.197 "nbd_start_disk", 00:17:55.197 "ublk_recover_disk", 00:17:55.197 "ublk_get_disks", 00:17:55.197 "ublk_stop_disk", 00:17:55.197 "ublk_start_disk", 00:17:55.197 "ublk_destroy_target", 00:17:55.198 "ublk_create_target", 00:17:55.198 "virtio_blk_create_transport", 00:17:55.198 "virtio_blk_get_transports", 00:17:55.198 "vhost_controller_set_coalescing", 00:17:55.198 "vhost_get_controllers", 00:17:55.198 "vhost_delete_controller", 00:17:55.198 "vhost_create_blk_controller", 00:17:55.198 "vhost_scsi_controller_remove_target", 00:17:55.198 "vhost_scsi_controller_add_target", 00:17:55.198 "vhost_start_scsi_controller", 00:17:55.198 "vhost_create_scsi_controller", 00:17:55.198 "thread_set_cpumask", 00:17:55.198 "framework_get_scheduler", 00:17:55.198 "framework_set_scheduler", 00:17:55.198 "framework_get_reactors", 00:17:55.198 "thread_get_io_channels", 00:17:55.198 "thread_get_pollers", 00:17:55.198 "thread_get_stats", 00:17:55.198 "framework_monitor_context_switch", 00:17:55.198 "spdk_kill_instance", 00:17:55.198 "log_enable_timestamps", 00:17:55.198 "log_get_flags", 00:17:55.198 "log_clear_flag", 00:17:55.198 "log_set_flag", 00:17:55.198 "log_get_level", 00:17:55.198 "log_set_level", 00:17:55.198 "log_get_print_level", 00:17:55.198 "log_set_print_level", 00:17:55.198 "framework_enable_cpumask_locks", 00:17:55.198 "framework_disable_cpumask_locks", 00:17:55.198 "framework_wait_init", 00:17:55.198 "framework_start_init", 00:17:55.198 "scsi_get_devices", 00:17:55.198 "bdev_get_histogram", 00:17:55.198 "bdev_enable_histogram", 00:17:55.198 "bdev_set_qos_limit", 00:17:55.198 "bdev_set_qd_sampling_period", 00:17:55.198 "bdev_get_bdevs", 00:17:55.198 "bdev_reset_iostat", 00:17:55.198 "bdev_get_iostat", 00:17:55.198 "bdev_examine", 00:17:55.198 "bdev_wait_for_examine", 00:17:55.198 "bdev_set_options", 00:17:55.198 "notify_get_notifications", 00:17:55.198 "notify_get_types", 00:17:55.198 "accel_get_stats", 00:17:55.198 "accel_set_options", 00:17:55.198 "accel_set_driver", 00:17:55.198 "accel_crypto_key_destroy", 00:17:55.198 "accel_crypto_keys_get", 00:17:55.198 "accel_crypto_key_create", 00:17:55.198 "accel_assign_opc", 00:17:55.198 "accel_get_module_info", 00:17:55.198 "accel_get_opc_assignments", 00:17:55.198 "vmd_rescan", 00:17:55.198 "vmd_remove_device", 00:17:55.198 "vmd_enable", 00:17:55.198 "sock_get_default_impl", 00:17:55.198 "sock_set_default_impl", 00:17:55.198 "sock_impl_set_options", 00:17:55.198 "sock_impl_get_options", 00:17:55.198 "iobuf_get_stats", 00:17:55.198 "iobuf_set_options", 00:17:55.198 "keyring_get_keys", 00:17:55.198 "framework_get_pci_devices", 
00:17:55.198 "framework_get_config", 00:17:55.198 "framework_get_subsystems", 00:17:55.198 "vfu_tgt_set_base_path", 00:17:55.198 "trace_get_info", 00:17:55.198 "trace_get_tpoint_group_mask", 00:17:55.198 "trace_disable_tpoint_group", 00:17:55.198 "trace_enable_tpoint_group", 00:17:55.198 "trace_clear_tpoint_mask", 00:17:55.198 "trace_set_tpoint_mask", 00:17:55.198 "spdk_get_version", 00:17:55.198 "rpc_get_methods" 00:17:55.198 ] 00:17:55.198 16:31:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.198 16:31:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:55.198 16:31:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2660479 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 2660479 ']' 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 2660479 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2660479 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2660479' 00:17:55.198 killing process with pid 2660479 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 2660479 00:17:55.198 16:31:14 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 2660479 00:17:55.764 00:17:55.764 real 0m1.223s 00:17:55.764 user 0m2.182s 00:17:55.764 sys 0m0.448s 00:17:55.764 16:31:15 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:55.764 16:31:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.764 ************************************ 00:17:55.764 END TEST spdkcli_tcp 00:17:55.764 ************************************ 00:17:55.765 16:31:15 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:55.765 16:31:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:55.765 16:31:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:55.765 16:31:15 -- common/autotest_common.sh@10 -- # set +x 00:17:55.765 ************************************ 00:17:55.765 START TEST dpdk_mem_utility 00:17:55.765 ************************************ 00:17:55.765 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:55.765 * Looking for test storage... 
00:17:55.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:17:55.765 16:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:17:55.765 16:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2660679 00:17:55.765 16:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:17:55.765 16:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2660679 00:17:55.765 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 2660679 ']' 00:17:55.765 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.765 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:55.765 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.765 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:55.765 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:55.765 [2024-07-22 16:31:15.292786] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:55.765 [2024-07-22 16:31:15.292881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660679 ] 00:17:55.765 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.765 [2024-07-22 16:31:15.360874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.023 [2024-07-22 16:31:15.445867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.282 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:56.282 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:17:56.282 16:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:17:56.282 16:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:17:56.282 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.282 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:56.282 { 00:17:56.282 "filename": "/tmp/spdk_mem_dump.txt" 00:17:56.282 } 00:17:56.282 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.282 16:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:17:56.282 DPDK memory size 814.000000 MiB in 1 heap(s) 00:17:56.282 1 heaps totaling size 814.000000 MiB 00:17:56.282 size: 814.000000 MiB heap id: 0 00:17:56.282 end heaps---------- 00:17:56.282 8 mempools totaling size 598.116089 MiB 00:17:56.282 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:17:56.282 size: 158.602051 MiB name: PDU_data_out_Pool 00:17:56.282 size: 84.521057 MiB name: bdev_io_2660679 00:17:56.282 size: 51.011292 MiB name: evtpool_2660679 00:17:56.282 size: 50.003479 MiB name: 
msgpool_2660679 00:17:56.282 size: 21.763794 MiB name: PDU_Pool 00:17:56.282 size: 19.513306 MiB name: SCSI_TASK_Pool 00:17:56.282 size: 0.026123 MiB name: Session_Pool 00:17:56.282 end mempools------- 00:17:56.282 6 memzones totaling size 4.142822 MiB 00:17:56.282 size: 1.000366 MiB name: RG_ring_0_2660679 00:17:56.282 size: 1.000366 MiB name: RG_ring_1_2660679 00:17:56.282 size: 1.000366 MiB name: RG_ring_4_2660679 00:17:56.282 size: 1.000366 MiB name: RG_ring_5_2660679 00:17:56.282 size: 0.125366 MiB name: RG_ring_2_2660679 00:17:56.282 size: 0.015991 MiB name: RG_ring_3_2660679 00:17:56.282 end memzones------- 00:17:56.282 16:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:17:56.282 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:17:56.282 list of free elements. size: 12.519348 MiB 00:17:56.282 element at address: 0x200000400000 with size: 1.999512 MiB 00:17:56.282 element at address: 0x200018e00000 with size: 0.999878 MiB 00:17:56.282 element at address: 0x200019000000 with size: 0.999878 MiB 00:17:56.282 element at address: 0x200003e00000 with size: 0.996277 MiB 00:17:56.282 element at address: 0x200031c00000 with size: 0.994446 MiB 00:17:56.282 element at address: 0x200013800000 with size: 0.978699 MiB 00:17:56.282 element at address: 0x200007000000 with size: 0.959839 MiB 00:17:56.282 element at address: 0x200019200000 with size: 0.936584 MiB 00:17:56.282 element at address: 0x200000200000 with size: 0.841614 MiB 00:17:56.282 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:17:56.282 element at address: 0x20000b200000 with size: 0.490723 MiB 00:17:56.282 element at address: 0x200000800000 with size: 0.487793 MiB 00:17:56.282 element at address: 0x200019400000 with size: 0.485657 MiB 00:17:56.282 element at address: 0x200027e00000 with size: 0.410034 MiB 00:17:56.282 element at address: 0x200003a00000 with size: 0.355530 MiB 00:17:56.282 list of standard malloc elements. 
size: 199.218079 MiB 00:17:56.282 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:17:56.282 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:17:56.282 element at address: 0x200018efff80 with size: 1.000122 MiB 00:17:56.282 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:17:56.282 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:17:56.282 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:17:56.282 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:17:56.282 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:17:56.282 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:17:56.282 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:17:56.282 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:17:56.282 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200003adb300 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200003adb500 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200003affa80 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200003affb40 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:17:56.282 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:17:56.282 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:17:56.282 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:17:56.282 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:17:56.282 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:17:56.282 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200027e69040 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:17:56.282 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:17:56.282 list of memzone associated elements. 
size: 602.262573 MiB 00:17:56.282 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:17:56.282 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:17:56.282 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:17:56.282 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:17:56.282 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:17:56.282 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2660679_0 00:17:56.282 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:17:56.282 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2660679_0 00:17:56.282 element at address: 0x200003fff380 with size: 48.003052 MiB 00:17:56.282 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2660679_0 00:17:56.282 element at address: 0x2000195be940 with size: 20.255554 MiB 00:17:56.282 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:17:56.282 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:17:56.282 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:17:56.282 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:17:56.282 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2660679 00:17:56.282 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:17:56.282 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2660679 00:17:56.282 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:17:56.282 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2660679 00:17:56.282 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:17:56.282 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:17:56.282 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:17:56.282 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:17:56.282 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:17:56.282 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:17:56.282 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:17:56.282 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:17:56.282 element at address: 0x200003eff180 with size: 1.000488 MiB 00:17:56.282 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2660679 00:17:56.282 element at address: 0x200003affc00 with size: 1.000488 MiB 00:17:56.282 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2660679 00:17:56.282 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:17:56.282 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2660679 00:17:56.282 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:17:56.282 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2660679 00:17:56.282 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:17:56.282 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2660679 00:17:56.282 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:17:56.282 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:17:56.282 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:17:56.282 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:17:56.282 element at address: 0x20001947c540 with size: 0.250488 MiB 00:17:56.282 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:17:56.282 element at address: 0x200003adf880 with size: 0.125488 MiB 00:17:56.282 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2660679 00:17:56.282 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:17:56.282 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:17:56.282 element at address: 0x200027e69100 with size: 0.023743 MiB 00:17:56.282 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:17:56.282 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:17:56.283 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2660679 00:17:56.283 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:17:56.283 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:17:56.283 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:17:56.283 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2660679 00:17:56.283 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:17:56.283 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2660679 00:17:56.283 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:17:56.283 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:17:56.283 16:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:17:56.283 16:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2660679 00:17:56.283 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 2660679 ']' 00:17:56.283 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 2660679 00:17:56.283 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:17:56.283 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.283 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2660679 00:17:56.283 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:56.283 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:56.283 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2660679' 00:17:56.283 killing process with pid 2660679 00:17:56.283 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 2660679 00:17:56.283 16:31:15 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 2660679 00:17:56.849 00:17:56.849 real 0m1.066s 00:17:56.849 user 0m1.015s 00:17:56.849 sys 0m0.414s 00:17:56.849 16:31:16 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:56.849 16:31:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:56.849 ************************************ 00:17:56.849 END TEST dpdk_mem_utility 00:17:56.849 ************************************ 00:17:56.849 16:31:16 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:17:56.849 16:31:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:56.849 16:31:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:56.849 16:31:16 -- common/autotest_common.sh@10 -- # set +x 00:17:56.849 ************************************ 00:17:56.849 START TEST event 00:17:56.849 ************************************ 00:17:56.849 16:31:16 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:17:56.849 * Looking for test storage... 
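The heap, mempool and memzone report above is a two-step flow: the env_dpdk_get_mem_stats RPC has the target write its DPDK memory statistics to /tmp/spdk_mem_dump.txt (the RPC reply above names the file), and scripts/dpdk_mem_info.py then renders that dump, first as a summary and then, with -m 0, as the per-element listing for heap 0. A sketch of the same flow against a running target, assuming the repo root as working directory:

  # ask the target to dump its DPDK memory statistics
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # summarize heaps, mempools and memzones from the dump
  ./scripts/dpdk_mem_info.py
  # detailed per-element view, as run by the test (here it produced the heap id: 0 listing)
  ./scripts/dpdk_mem_info.py -m 0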
00:17:56.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:17:56.849 16:31:16 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:17:56.849 16:31:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:17:56.849 16:31:16 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:17:56.849 16:31:16 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:17:56.849 16:31:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:56.849 16:31:16 event -- common/autotest_common.sh@10 -- # set +x 00:17:56.849 ************************************ 00:17:56.849 START TEST event_perf 00:17:56.849 ************************************ 00:17:56.849 16:31:16 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:17:56.849 Running I/O for 1 seconds...[2024-07-22 16:31:16.393578] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:56.849 [2024-07-22 16:31:16.393645] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660867 ] 00:17:56.849 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.849 [2024-07-22 16:31:16.465694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.107 [2024-07-22 16:31:16.558877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.107 [2024-07-22 16:31:16.558945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.107 [2024-07-22 16:31:16.559038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.107 [2024-07-22 16:31:16.559042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.040 Running I/O for 1 seconds... 00:17:58.040 lcore 0: 232910 00:17:58.040 lcore 1: 232909 00:17:58.040 lcore 2: 232909 00:17:58.040 lcore 3: 232908 00:17:58.040 done. 00:17:58.040 00:17:58.040 real 0m1.261s 00:17:58.040 user 0m4.150s 00:17:58.040 sys 0m0.106s 00:17:58.040 16:31:17 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:58.040 16:31:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:17:58.040 ************************************ 00:17:58.040 END TEST event_perf 00:17:58.040 ************************************ 00:17:58.040 16:31:17 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:17:58.040 16:31:17 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:58.040 16:31:17 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:58.040 16:31:17 event -- common/autotest_common.sh@10 -- # set +x 00:17:58.040 ************************************ 00:17:58.040 START TEST event_reactor 00:17:58.040 ************************************ 00:17:58.040 16:31:17 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:17:58.298 [2024-07-22 16:31:17.696586] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
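The event_perf run that just completed drives the event framework on four reactors (-m 0xF) for one second (-t 1) and prints how many events each lcore handled, the lcore N: ... lines above. It can be re-run standalone from the repo root, assuming hugepages are set up as for any SPDK app:

  # four cores, one second; prints per-lcore event counts when done
  ./test/event/event_perf/event_perf -m 0xF -t 1

The near-identical counts across lcores 0-3 above suggest the one-second load was spread evenly over the four reactors.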
00:17:58.298 [2024-07-22 16:31:17.696652] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661028 ] 00:17:58.298 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.298 [2024-07-22 16:31:17.765131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.298 [2024-07-22 16:31:17.856919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.672 test_start 00:17:59.672 oneshot 00:17:59.672 tick 100 00:17:59.672 tick 100 00:17:59.672 tick 250 00:17:59.672 tick 100 00:17:59.672 tick 100 00:17:59.672 tick 100 00:17:59.672 tick 250 00:17:59.672 tick 500 00:17:59.672 tick 100 00:17:59.672 tick 100 00:17:59.672 tick 250 00:17:59.672 tick 100 00:17:59.672 tick 100 00:17:59.672 test_end 00:17:59.672 00:17:59.672 real 0m1.251s 00:17:59.672 user 0m1.162s 00:17:59.672 sys 0m0.084s 00:17:59.672 16:31:18 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:59.672 16:31:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:17:59.672 ************************************ 00:17:59.672 END TEST event_reactor 00:17:59.672 ************************************ 00:17:59.672 16:31:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:17:59.672 16:31:18 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:59.672 16:31:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:59.672 16:31:18 event -- common/autotest_common.sh@10 -- # set +x 00:17:59.672 ************************************ 00:17:59.672 START TEST event_reactor_perf 00:17:59.672 ************************************ 00:17:59.672 16:31:18 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:17:59.672 [2024-07-22 16:31:18.994524] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
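The event_reactor test that just finished runs a single reactor and logs every scheduled event between test_start and test_end: one oneshot event plus recurring ticks, where the 100/250/500 labels appear to reflect the relative timer periods (the tick 100 lines fire most often, tick 500 only once in the window). Standalone form, as the test invokes it:

  # single reactor, one second of oneshot and periodic timer events
  ./test/event/reactor/reactor -t 1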
00:17:59.672 [2024-07-22 16:31:18.994592] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661199 ] 00:17:59.672 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.672 [2024-07-22 16:31:19.067985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.672 [2024-07-22 16:31:19.159322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.609 test_start 00:18:00.609 test_end 00:18:00.609 Performance: 351359 events per second 00:18:00.609 00:18:00.609 real 0m1.256s 00:18:00.609 user 0m1.165s 00:18:00.609 sys 0m0.085s 00:18:00.609 16:31:20 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:00.609 16:31:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:18:00.609 ************************************ 00:18:00.609 END TEST event_reactor_perf 00:18:00.609 ************************************ 00:18:00.609 16:31:20 event -- event/event.sh@49 -- # uname -s 00:18:00.866 16:31:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:18:00.867 16:31:20 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:18:00.867 16:31:20 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:00.867 16:31:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:00.867 16:31:20 event -- common/autotest_common.sh@10 -- # set +x 00:18:00.867 ************************************ 00:18:00.867 START TEST event_scheduler 00:18:00.867 ************************************ 00:18:00.867 16:31:20 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:18:00.867 * Looking for test storage... 00:18:00.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:18:00.867 16:31:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:18:00.867 16:31:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2661486 00:18:00.867 16:31:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:18:00.867 16:31:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:18:00.867 16:31:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2661486 00:18:00.867 16:31:20 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 2661486 ']' 00:18:00.867 16:31:20 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.867 16:31:20 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.867 16:31:20 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
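event_reactor_perf, completed above, is the throughput counterpart: it keeps a single reactor saturated with events for the -t 1 second window and reports the dispatch rate, here Performance: 351359 events per second. Standalone form:

  # single-core event dispatch rate; prints 'Performance: N events per second'
  ./test/event/reactor_perf/reactor_perf -t 1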
00:18:00.867 16:31:20 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.867 16:31:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:00.867 [2024-07-22 16:31:20.373698] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:00.867 [2024-07-22 16:31:20.373770] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661486 ] 00:18:00.867 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.867 [2024-07-22 16:31:20.439433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:01.124 [2024-07-22 16:31:20.526937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.124 [2024-07-22 16:31:20.527002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.124 [2024-07-22 16:31:20.527066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:01.124 [2024-07-22 16:31:20.527069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.124 16:31:20 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:01.124 16:31:20 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:18:01.124 16:31:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:18:01.124 16:31:20 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.124 16:31:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:01.124 POWER: Env isn't set yet! 00:18:01.124 POWER: Attempting to initialise ACPI cpufreq power management... 00:18:01.124 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:18:01.124 POWER: Cannot get available frequencies of lcore 0 00:18:01.124 POWER: Attempting to initialise PSTAT power management... 
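The scheduler test starting here was launched with --wait-for-rpc, so the framework pauses before init: the first RPC selects the scheduler (framework_set_scheduler dynamic), framework_start_init then brings the reactors up, and the POWER lines that follow are the dynamic scheduler taking over the cores' cpufreq governors (restored again at test exit further down). The same sequence against a hand-started target, sketched under the assumption that the target is still waiting for RPC:

  # must run before framework init completes
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init
  # confirm the active scheduler and its load/core/busy options
  ./scripts/rpc.py framework_get_scheduler

The scheduler_thread_create and scheduler_thread_set_active calls below go through a test-only RPC plugin (--plugin scheduler_plugin), which is why they do not appear in the rpc_get_methods listing earlier.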
00:18:01.124 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:18:01.124 POWER: Initialized successfully for lcore 0 power management 00:18:01.124 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:18:01.124 POWER: Initialized successfully for lcore 1 power management 00:18:01.124 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:18:01.124 POWER: Initialized successfully for lcore 2 power management 00:18:01.124 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:18:01.124 POWER: Initialized successfully for lcore 3 power management 00:18:01.124 [2024-07-22 16:31:20.614149] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:18:01.124 [2024-07-22 16:31:20.614166] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:18:01.124 [2024-07-22 16:31:20.614176] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:18:01.124 16:31:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.124 16:31:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:18:01.124 16:31:20 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.124 16:31:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:01.124 [2024-07-22 16:31:20.715367] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:18:01.124 16:31:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.124 16:31:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:18:01.124 16:31:20 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:01.124 16:31:20 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:01.124 16:31:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:01.124 ************************************ 00:18:01.124 START TEST scheduler_create_thread 00:18:01.124 ************************************ 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.124 2 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.124 3 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.124 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.124 4 00:18:01.125 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.125 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:18:01.125 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.125 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 5 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 6 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 7 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 8 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 9 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 10 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.382 16:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:02.755 16:31:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.755 16:31:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:18:02.755 16:31:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:18:02.755 16:31:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.755 16:31:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:04.128 16:31:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.128 00:18:04.128 real 0m2.618s 00:18:04.128 user 0m0.010s 00:18:04.128 sys 0m0.005s 00:18:04.128 16:31:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:04.128 16:31:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:04.128 ************************************ 00:18:04.128 END TEST scheduler_create_thread 00:18:04.128 ************************************ 00:18:04.128 16:31:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:04.128 16:31:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2661486 00:18:04.128 16:31:23 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 2661486 ']' 00:18:04.128 16:31:23 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 2661486 00:18:04.128 16:31:23 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
00:18:04.128 16:31:23 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:04.128 16:31:23 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2661486 00:18:04.128 16:31:23 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:04.128 16:31:23 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:04.128 16:31:23 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2661486' 00:18:04.128 killing process with pid 2661486 00:18:04.128 16:31:23 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 2661486 00:18:04.128 16:31:23 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 2661486 00:18:04.387 [2024-07-22 16:31:23.842195] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:18:04.387 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:18:04.387 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:18:04.387 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:18:04.387 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:18:04.387 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:18:04.387 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:18:04.387 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:18:04.387 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:18:04.646 00:18:04.646 real 0m3.792s 00:18:04.646 user 0m5.768s 00:18:04.646 sys 0m0.332s 00:18:04.646 16:31:24 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:04.646 16:31:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:04.646 ************************************ 00:18:04.646 END TEST event_scheduler 00:18:04.646 ************************************ 00:18:04.646 16:31:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:18:04.646 16:31:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:18:04.646 16:31:24 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:04.646 16:31:24 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:04.646 16:31:24 event -- common/autotest_common.sh@10 -- # set +x 00:18:04.646 ************************************ 00:18:04.646 START TEST app_repeat 00:18:04.646 ************************************ 00:18:04.646 16:31:24 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2661940 00:18:04.646 16:31:24 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2661940' 00:18:04.646 Process app_repeat pid: 2661940 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:18:04.646 spdk_app_start Round 0 00:18:04.646 16:31:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2661940 /var/tmp/spdk-nbd.sock 00:18:04.646 16:31:24 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2661940 ']' 00:18:04.646 16:31:24 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:04.646 16:31:24 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:04.646 16:31:24 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:04.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:04.646 16:31:24 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:04.646 16:31:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:04.646 [2024-07-22 16:31:24.155666] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:04.646 [2024-07-22 16:31:24.155736] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661940 ] 00:18:04.646 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.646 [2024-07-22 16:31:24.228657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:04.904 [2024-07-22 16:31:24.318878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.904 [2024-07-22 16:31:24.318884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.904 16:31:24 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:04.904 16:31:24 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:18:04.904 16:31:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:05.162 Malloc0 00:18:05.162 16:31:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:05.420 Malloc1 00:18:05.420 16:31:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:05.420 16:31:24 event.app_repeat 
-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.420 16:31:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:05.678 /dev/nbd0 00:18:05.678 16:31:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:05.678 16:31:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:05.678 16:31:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:18:05.678 16:31:25 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:18:05.678 16:31:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:05.678 16:31:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:05.678 16:31:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:18:05.678 16:31:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:18:05.678 16:31:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:05.678 16:31:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:05.679 16:31:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:05.679 1+0 records in 00:18:05.679 1+0 records out 00:18:05.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184479 s, 22.2 MB/s 00:18:05.679 16:31:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:05.679 16:31:25 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:18:05.679 16:31:25 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:05.679 16:31:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:05.679 16:31:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:18:05.679 16:31:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.679 16:31:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.679 16:31:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:05.936 /dev/nbd1 00:18:05.936 16:31:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:05.936 16:31:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:18:05.936 16:31:25 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:05.936 1+0 records in 00:18:05.936 1+0 records out 00:18:05.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026385 s, 15.5 MB/s 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:05.936 16:31:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:18:05.936 16:31:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.936 16:31:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.936 16:31:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:05.936 16:31:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:05.936 16:31:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:06.194 { 00:18:06.194 "nbd_device": "/dev/nbd0", 00:18:06.194 "bdev_name": "Malloc0" 00:18:06.194 }, 00:18:06.194 { 00:18:06.194 "nbd_device": "/dev/nbd1", 00:18:06.194 "bdev_name": "Malloc1" 00:18:06.194 } 00:18:06.194 ]' 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:06.194 { 00:18:06.194 "nbd_device": "/dev/nbd0", 00:18:06.194 "bdev_name": "Malloc0" 00:18:06.194 }, 00:18:06.194 { 00:18:06.194 "nbd_device": "/dev/nbd1", 00:18:06.194 "bdev_name": "Malloc1" 00:18:06.194 } 00:18:06.194 ]' 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:06.194 /dev/nbd1' 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:06.194 /dev/nbd1' 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:06.194 16:31:25 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:06.194 256+0 records in 00:18:06.194 256+0 records out 00:18:06.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399892 s, 262 MB/s 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:06.194 256+0 records in 00:18:06.194 256+0 records out 00:18:06.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236812 s, 44.3 MB/s 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:06.194 16:31:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:06.452 256+0 records in 00:18:06.452 256+0 records out 00:18:06.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257525 s, 40.7 MB/s 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:06.452 16:31:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:18:06.453 16:31:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.453 16:31:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:06.710 16:31:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:06.710 16:31:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:06.710 16:31:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:06.710 16:31:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.710 16:31:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.710 16:31:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:06.710 16:31:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:06.710 16:31:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.710 16:31:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.710 16:31:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:06.968 16:31:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:06.968 16:31:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:06.968 16:31:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:06.968 16:31:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.968 16:31:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.968 16:31:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:06.968 16:31:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:06.968 16:31:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.968 16:31:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:06.968 16:31:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:06.968 16:31:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:07.226 16:31:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:07.226 16:31:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:07.484 16:31:26 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:18:07.743 [2024-07-22 16:31:27.188641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:07.743 [2024-07-22 16:31:27.279033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.743 [2024-07-22 16:31:27.279033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.743 [2024-07-22 16:31:27.340640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:07.743 [2024-07-22 16:31:27.340718] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:11.022 16:31:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:11.022 16:31:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:18:11.022 spdk_app_start Round 1 00:18:11.022 16:31:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2661940 /var/tmp/spdk-nbd.sock 00:18:11.022 16:31:29 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2661940 ']' 00:18:11.022 16:31:29 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:11.022 16:31:29 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.022 16:31:29 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:11.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:11.023 16:31:29 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.023 16:31:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:11.023 16:31:30 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:11.023 16:31:30 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:18:11.023 16:31:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:11.023 Malloc0 00:18:11.023 16:31:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:11.281 Malloc1 00:18:11.281 16:31:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
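The restart cycle traced above (spdk_kill_instance SIGTERM, sleep 3, then the 'spdk_app_start Round 1' banner) is one pass of the app_repeat loop in test/event/event.sh. A condensed sketch of that loop as implied by the event.sh@23-@35 markers in the trace; $repeat_pid and $rootdir are assumed names, the log shows the literal pid 2661940 and the full Jenkins workspace path instead:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # the app reopens its RPC socket after every restart
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock

        # two malloc bdevs (64 MB, 4 KiB blocks): Malloc0 and Malloc1
        $rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
        $rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'

        # SIGTERM ends this iteration; app_repeat relaunches for the next round
        $rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done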
00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:11.281 16:31:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:11.538 /dev/nbd0 00:18:11.538 16:31:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:11.538 16:31:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:11.538 1+0 records in 00:18:11.538 1+0 records out 00:18:11.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00013708 s, 29.9 MB/s 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:11.538 16:31:31 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:18:11.538 16:31:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:11.538 16:31:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:11.538 16:31:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:11.796 /dev/nbd1 00:18:11.796 16:31:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:11.796 16:31:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
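waitfornbd, traced in full for nbd0 just above (autotest_common.sh@864-@885) and again for nbd1 below, first waits for the kernel to publish the device in /proc/partitions and then proves it is actually readable with a single direct-I/O block. A minimal sketch reconstructed from the trace; the temp-file location is an assumption, the log resolves it under spdk/test/event/:

    function waitfornbd() {
        local nbd_name=$1
        local i
        local tmp_file=${testdir:-/tmp}/nbdtest   # assumed; trace shows spdk/test/event/nbdtest

        # up to ~2s for the device to appear in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            else
                sleep 0.1
            fi
        done

        # the node can exist before I/O works: read one 4 KiB block with O_DIRECT
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of="$tmp_file" bs=4096 count=1 iflag=direct; then
                break
            else
                sleep 0.1
            fi
        done

        local size
        size=$(stat -c %s "$tmp_file")
        rm -f "$tmp_file"
        # a non-empty read (the '[' 4096 '!=' 0 ']' check in the trace) means ready
        [ "$size" != "0" ]
    }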
00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:11.796 1+0 records in 00:18:11.796 1+0 records out 00:18:11.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215007 s, 19.1 MB/s 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:11.796 16:31:31 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:18:11.796 16:31:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:11.796 16:31:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:11.796 16:31:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:11.796 16:31:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:11.796 16:31:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:12.053 { 00:18:12.053 "nbd_device": "/dev/nbd0", 00:18:12.053 "bdev_name": "Malloc0" 00:18:12.053 }, 00:18:12.053 { 00:18:12.053 "nbd_device": "/dev/nbd1", 00:18:12.053 "bdev_name": "Malloc1" 00:18:12.053 } 00:18:12.053 ]' 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:12.053 { 00:18:12.053 "nbd_device": "/dev/nbd0", 00:18:12.053 "bdev_name": "Malloc0" 00:18:12.053 }, 00:18:12.053 { 00:18:12.053 "nbd_device": "/dev/nbd1", 00:18:12.053 "bdev_name": "Malloc1" 00:18:12.053 } 00:18:12.053 ]' 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:12.053 /dev/nbd1' 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:12.053 /dev/nbd1' 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:12.053 256+0 records in 00:18:12.053 256+0 records out 00:18:12.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501034 s, 209 MB/s 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:12.053 256+0 records in 00:18:12.053 256+0 records out 00:18:12.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020571 s, 51.0 MB/s 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:12.053 256+0 records in 00:18:12.053 256+0 records out 00:18:12.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253943 s, 41.3 MB/s 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:12.053 16:31:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:12.311 16:31:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:12.311 16:31:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:12.311 16:31:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:12.311 16:31:31 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.311 16:31:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.311 16:31:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:12.311 16:31:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:12.311 16:31:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.311 16:31:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:12.311 16:31:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:12.568 16:31:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:12.568 16:31:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:12.568 16:31:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:12.568 16:31:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.568 16:31:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.568 16:31:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:12.826 16:31:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:12.826 16:31:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.826 16:31:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:12.826 16:31:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:12.826 16:31:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:12.826 16:31:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:12.826 16:31:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:12.826 16:31:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:13.084 16:31:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:13.084 16:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:13.084 16:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:13.084 16:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:13.084 16:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:13.084 16:31:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:13.084 16:31:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:13.084 16:31:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:13.084 16:31:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:13.084 16:31:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:13.341 16:31:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:13.599 [2024-07-22 16:31:33.001561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:13.599 [2024-07-22 16:31:33.091712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.599 [2024-07-22 16:31:33.091716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.599 [2024-07-22 16:31:33.153626] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
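waitfornbd_exit (nbd_common.sh@35-@45, traced twice above for nbd0 and nbd1) is the inverse wait: after nbd_stop_disk it polls /proc/partitions until the device name is gone, so the next round can safely reuse /dev/nbd0 and /dev/nbd1. Sketch reconstructed from the trace:

    function waitfornbd_exit() {
        local nbd_name=$1
        local i

        # give the kernel up to ~2s to tear the nbd device down
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1
            else
                break    # no longer listed: the disk is detached
            fi
        done
        return 0
    }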
00:18:13.599 [2024-07-22 16:31:33.153707] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:16.138 16:31:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:16.138 16:31:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:18:16.138 spdk_app_start Round 2 00:18:16.138 16:31:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2661940 /var/tmp/spdk-nbd.sock 00:18:16.138 16:31:35 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2661940 ']' 00:18:16.138 16:31:35 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:16.138 16:31:35 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:16.138 16:31:35 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:16.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:16.138 16:31:35 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:16.138 16:31:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:16.395 16:31:36 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:16.395 16:31:36 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:18:16.395 16:31:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:16.653 Malloc0 00:18:16.653 16:31:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:16.911 Malloc1 00:18:16.911 16:31:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:16.911 16:31:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.911 16:31:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:16.911 16:31:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:16.911 16:31:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:16.911 16:31:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:16.911 16:31:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:16.911 16:31:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.911 16:31:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:16.911 16:31:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.911 16:31:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:16.911 16:31:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:17.169 16:31:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:17.169 16:31:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:17.169 16:31:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:17.169 16:31:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:17.169 /dev/nbd0 00:18:17.169 
16:31:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:17.169 16:31:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:17.169 16:31:36 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:18:17.169 16:31:36 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:18:17.169 16:31:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:17.169 16:31:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:17.169 16:31:36 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:18:17.169 16:31:36 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:18:17.169 16:31:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:17.169 16:31:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:17.169 16:31:36 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:17.169 1+0 records in 00:18:17.169 1+0 records out 00:18:17.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185688 s, 22.1 MB/s 00:18:17.426 16:31:36 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:17.426 16:31:36 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:18:17.426 16:31:36 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:17.426 16:31:36 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:17.426 16:31:36 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:18:17.426 16:31:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.426 16:31:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:17.426 16:31:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:17.426 /dev/nbd1 00:18:17.684 16:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:17.684 16:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:17.684 1+0 records in 00:18:17.684 1+0 records out 00:18:17.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181414 s, 22.6 MB/s 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:18:17.684 16:31:37 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:18:17.684 16:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.684 16:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:17.684 16:31:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:17.684 16:31:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:17.684 16:31:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:17.942 { 00:18:17.942 "nbd_device": "/dev/nbd0", 00:18:17.942 "bdev_name": "Malloc0" 00:18:17.942 }, 00:18:17.942 { 00:18:17.942 "nbd_device": "/dev/nbd1", 00:18:17.942 "bdev_name": "Malloc1" 00:18:17.942 } 00:18:17.942 ]' 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:17.942 { 00:18:17.942 "nbd_device": "/dev/nbd0", 00:18:17.942 "bdev_name": "Malloc0" 00:18:17.942 }, 00:18:17.942 { 00:18:17.942 "nbd_device": "/dev/nbd1", 00:18:17.942 "bdev_name": "Malloc1" 00:18:17.942 } 00:18:17.942 ]' 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:17.942 /dev/nbd1' 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:17.942 /dev/nbd1' 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:17.942 256+0 records in 00:18:17.942 256+0 records out 00:18:17.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453767 s, 231 MB/s 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:17.942 256+0 records in 00:18:17.942 256+0 records out 00:18:17.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239473 s, 43.8 MB/s 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:17.942 256+0 records in 00:18:17.942 256+0 records out 00:18:17.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250915 s, 41.8 MB/s 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.942 16:31:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:18.200 16:31:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:18.200 16:31:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:18.200 16:31:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:18.200 16:31:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.200 16:31:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.200 16:31:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:18.200 16:31:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:18.200 16:31:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
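The dd/cmp sequence above is nbd_dd_data_verify (nbd_common.sh@70-@85). In write mode it fills a 1 MiB temp file from /dev/urandom and copies it onto every listed nbd device with O_DIRECT; in verify mode it compares each device byte-for-byte against that file and then deletes it. Reconstructed sketch; only the temp-file directory is assumed:

    function nbd_dd_data_verify() {
        local nbd_list=($1)
        local operation=$2
        local tmp_file=${testdir:-/tmp}/nbdrandtest

        if [ "$operation" = "write" ]; then
            # 256 x 4 KiB = 1 MiB of random data, pushed to every device
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of=$i bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = "verify" ]; then
            # cmp -b prints differing bytes; -n 1M limits it to the written span
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" $i
            done
            rm "$tmp_file"
        fi
    }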
00:18:18.200 16:31:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.200 16:31:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:18.458 16:31:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:18.458 16:31:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:18.458 16:31:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:18.458 16:31:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.458 16:31:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.458 16:31:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:18.458 16:31:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:18.458 16:31:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.458 16:31:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:18.458 16:31:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.458 16:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:18.716 16:31:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:18.716 16:31:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:18.974 16:31:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:19.232 [2024-07-22 16:31:38.783088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:19.232 [2024-07-22 16:31:38.873492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.232 [2024-07-22 16:31:38.873497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.490 [2024-07-22 16:31:38.935630] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:19.490 [2024-07-22 16:31:38.935709] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
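nbd_get_count (nbd_common.sh@61-@66), traced above both with a two-disk JSON list and with the empty '[]' after teardown, asks the target over RPC which nbd devices it still exports and counts them; callers then assert 2 after start and 0 after stop (the '[' 2 -ne 2 ']' and '[' 0 -ne 0 ']' checks). A sketch consistent with the trace, with $rootdir standing in for the spdk checkout path:

    function nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count

        nbd_disks_json=$("$rootdir"/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero when it counts zero matches, hence || true
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }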
00:18:22.016 16:31:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2661940 /var/tmp/spdk-nbd.sock 00:18:22.016 16:31:41 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2661940 ']' 00:18:22.016 16:31:41 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:22.016 16:31:41 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:22.016 16:31:41 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:22.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:22.016 16:31:41 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:22.016 16:31:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:22.273 16:31:41 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:22.273 16:31:41 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:18:22.274 16:31:41 event.app_repeat -- event/event.sh@39 -- # killprocess 2661940 00:18:22.274 16:31:41 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 2661940 ']' 00:18:22.274 16:31:41 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 2661940 00:18:22.274 16:31:41 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:18:22.274 16:31:41 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:22.274 16:31:41 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2661940 00:18:22.274 16:31:41 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:22.274 16:31:41 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:22.274 16:31:41 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2661940' 00:18:22.274 killing process with pid 2661940 00:18:22.274 16:31:41 event.app_repeat -- common/autotest_common.sh@965 -- # kill 2661940 00:18:22.274 16:31:41 event.app_repeat -- common/autotest_common.sh@970 -- # wait 2661940 00:18:22.532 spdk_app_start is called in Round 0. 00:18:22.532 Shutdown signal received, stop current app iteration 00:18:22.532 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:18:22.532 spdk_app_start is called in Round 1. 00:18:22.532 Shutdown signal received, stop current app iteration 00:18:22.532 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:18:22.532 spdk_app_start is called in Round 2. 00:18:22.532 Shutdown signal received, stop current app iteration 00:18:22.532 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:18:22.532 spdk_app_start is called in Round 3. 
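killprocess (autotest_common.sh@946-@970, traced above for pid 2661940) refuses an empty pid, confirms the process is alive, makes sure it is not about to signal a bare sudo process, then SIGTERMs and reaps it so the test sees the exit status. A simplified sketch matching only the branches visible in the trace; the real helper has more cases, such as signalling sudo's child instead:

    function killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1          # the '[' -z 2661940 ']' check
        kill -0 "$pid"                     # still alive?

        if [ "$(uname)" = "Linux" ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
            [ "$process_name" = "sudo" ] && return 1
        fi

        echo "killing process with pid $pid"
        kill "$pid"     # SIGTERM by default
        wait "$pid"     # reap the child so its status propagates
    }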
00:18:22.532 Shutdown signal received, stop current app iteration 00:18:22.532 16:31:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:18:22.532 16:31:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:18:22.532 00:18:22.532 real 0m17.908s 00:18:22.532 user 0m38.910s 00:18:22.532 sys 0m3.239s 00:18:22.532 16:31:42 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:22.532 16:31:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:22.532 ************************************ 00:18:22.532 END TEST app_repeat 00:18:22.532 ************************************ 00:18:22.532 16:31:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:18:22.532 16:31:42 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:18:22.532 16:31:42 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:22.532 16:31:42 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:22.532 16:31:42 event -- common/autotest_common.sh@10 -- # set +x 00:18:22.532 ************************************ 00:18:22.532 START TEST cpu_locks 00:18:22.532 ************************************ 00:18:22.532 16:31:42 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:18:22.532 * Looking for test storage... 00:18:22.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:18:22.532 16:31:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:18:22.532 16:31:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:18:22.532 16:31:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:18:22.532 16:31:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:18:22.532 16:31:42 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:22.532 16:31:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:22.532 16:31:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:22.532 ************************************ 00:18:22.532 START TEST default_locks 00:18:22.532 ************************************ 00:18:22.532 16:31:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:18:22.532 16:31:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2664285 00:18:22.532 16:31:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:18:22.532 16:31:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2664285 00:18:22.532 16:31:42 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2664285 ']' 00:18:22.532 16:31:42 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.532 16:31:42 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:22.532 16:31:42 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
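Every START TEST/END TEST banner pair plus the real/user/sys line between them comes from the run_test wrapper (autotest_common.sh@1097-@1122 in the trace): it validates its arguments, prints the opening banner, times the test body, and prints the closing banner. A sketch of its shape; the traced helper also calls SPDK's xtrace_disable/xtrace_restore and timing bookkeeping, elided here:

    function run_test() {
        if [ $# -le 1 ]; then              # the '[' 2 -le 1 ']' check above
            echo "usage: run_test name test_command [args]" >&2
            return 1
        fi
        local test_name=$1
        shift

        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                          # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }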
00:18:22.532 16:31:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:22.532 16:31:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:22.791 [2024-07-22 16:31:42.216161] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:22.791 [2024-07-22 16:31:42.216242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2664285 ] 00:18:22.791 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.791 [2024-07-22 16:31:42.283055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.791 [2024-07-22 16:31:42.368094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.049 16:31:42 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:23.049 16:31:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:18:23.049 16:31:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2664285 00:18:23.049 16:31:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2664285 00:18:23.049 16:31:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:23.614 lslocks: write error 00:18:23.614 16:31:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2664285 00:18:23.614 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 2664285 ']' 00:18:23.614 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 2664285 00:18:23.614 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:18:23.614 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:23.614 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2664285 00:18:23.614 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:23.614 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:23.614 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2664285' 00:18:23.614 killing process with pid 2664285 00:18:23.614 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 2664285 00:18:23.614 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 2664285 00:18:24.179 16:31:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2664285 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2664285 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 2664285 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2664285 ']' 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:24.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2664285) - No such process 00:18:24.180 ERROR: process (pid: 2664285) is no longer running 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:24.180 00:18:24.180 real 0m1.368s 00:18:24.180 user 0m1.282s 00:18:24.180 sys 0m0.585s 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:24.180 16:31:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:24.180 ************************************ 00:18:24.180 END TEST default_locks 00:18:24.180 ************************************ 00:18:24.180 16:31:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:18:24.180 16:31:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:24.180 16:31:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:24.180 16:31:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:24.180 ************************************ 00:18:24.180 START TEST default_locks_via_rpc 00:18:24.180 ************************************ 00:18:24.180 16:31:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:18:24.180 16:31:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2664458 00:18:24.180 16:31:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:18:24.180 16:31:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2664458 00:18:24.180 16:31:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2664458 ']' 00:18:24.180 16:31:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.180 16:31:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:24.180 16:31:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.180 16:31:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:24.180 16:31:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.180 [2024-07-22 16:31:43.630493] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:24.180 [2024-07-22 16:31:43.630568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2664458 ] 00:18:24.180 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.180 [2024-07-22 16:31:43.696132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.180 [2024-07-22 16:31:43.783585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2664458 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2664458 00:18:24.438 16:31:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:25.005 16:31:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2664458 00:18:25.005 16:31:44 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 2664458 ']' 00:18:25.005 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 2664458 00:18:25.005 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:18:25.005 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:25.005 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2664458 00:18:25.005 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:25.005 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:25.005 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2664458' 00:18:25.005 killing process with pid 2664458 00:18:25.005 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 2664458 00:18:25.005 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 2664458 00:18:25.263 00:18:25.263 real 0m1.216s 00:18:25.263 user 0m1.157s 00:18:25.263 sys 0m0.519s 00:18:25.263 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:25.263 16:31:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.263 ************************************ 00:18:25.263 END TEST default_locks_via_rpc 00:18:25.263 ************************************ 00:18:25.263 16:31:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:18:25.263 16:31:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:25.263 16:31:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:25.263 16:31:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:25.263 ************************************ 00:18:25.263 START TEST non_locking_app_on_locked_coremask 00:18:25.263 ************************************ 00:18:25.263 16:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:18:25.263 16:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2664621 00:18:25.263 16:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:18:25.263 16:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2664621 /var/tmp/spdk.sock 00:18:25.263 16:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2664621 ']' 00:18:25.263 16:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.263 16:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:25.263 16:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
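The default_locks_via_rpc pass above toggles core locks at runtime instead of at startup: the target launches normally, framework_disable_cpumask_locks releases the core-0 lock, and framework_enable_cpumask_locks re-claims it before the lslocks check. A minimal way to reproduce that flow by hand, assuming the stock scripts/rpc.py client and the socket path shown in the trace (the pidof lookup is illustrative):

  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release the core-0 lock file
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim it
  lslocks -p "$(pidof spdk_tgt)" | grep spdk_cpu_lock                      # lock is held again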
00:18:25.263 16:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:25.263 16:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:25.263 [2024-07-22 16:31:44.899370] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:25.263 [2024-07-22 16:31:44.899469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2664621 ] 00:18:25.521 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.521 [2024-07-22 16:31:44.975446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.521 [2024-07-22 16:31:45.069533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.779 16:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:25.779 16:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:18:25.779 16:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2664745 00:18:25.779 16:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:18:25.779 16:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2664745 /var/tmp/spdk2.sock 00:18:25.779 16:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2664745 ']' 00:18:25.779 16:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:25.779 16:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:25.779 16:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:25.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:25.779 16:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:25.779 16:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:25.779 [2024-07-22 16:31:45.379810] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:25.779 [2024-07-22 16:31:45.379911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2664745 ] 00:18:25.779 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.037 [2024-07-22 16:31:45.490212] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
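At this point both targets of non_locking_app_on_locked_coremask are coming up: the first holds the core-0 lock, and the second, started with --disable-cpumask-locks on /var/tmp/spdk2.sock, skips the claim entirely, which is why it can coexist with the lock holder. The locks_exist check the trace keeps invoking reduces to one lslocks pipeline; reconstructed from the cpu_locks.sh@22 lines above (per-core lock files are named /var/tmp/spdk_cpu_lock_<core>):

  locks_exist() {
          # true if the given pid holds a lock on any spdk_cpu_lock_* file
          lslocks -p "$1" | grep -q spdk_cpu_lock
  }

The recurring "lslocks: write error" lines are harmless: grep -q exits on the first match, and lslocks complains about the resulting broken pipe.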
00:18:26.037 [2024-07-22 16:31:45.490260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.037 [2024-07-22 16:31:45.679350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.971 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:26.971 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:18:26.971 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2664621 00:18:26.971 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2664621 00:18:26.971 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:27.228 lslocks: write error 00:18:27.228 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2664621 00:18:27.228 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2664621 ']' 00:18:27.228 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2664621 00:18:27.228 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:18:27.228 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:27.228 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2664621 00:18:27.228 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:27.228 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:27.228 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2664621' 00:18:27.228 killing process with pid 2664621 00:18:27.228 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2664621 00:18:27.228 16:31:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2664621 00:18:28.161 16:31:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2664745 00:18:28.161 16:31:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2664745 ']' 00:18:28.161 16:31:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2664745 00:18:28.161 16:31:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:18:28.161 16:31:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:28.161 16:31:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2664745 00:18:28.161 16:31:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:28.161 16:31:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:28.161 16:31:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2664745' 00:18:28.161 
killing process with pid 2664745 00:18:28.161 16:31:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2664745 00:18:28.161 16:31:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2664745 00:18:28.727 00:18:28.727 real 0m3.234s 00:18:28.727 user 0m3.378s 00:18:28.727 sys 0m1.081s 00:18:28.727 16:31:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:28.727 16:31:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:28.727 ************************************ 00:18:28.727 END TEST non_locking_app_on_locked_coremask 00:18:28.727 ************************************ 00:18:28.727 16:31:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:18:28.727 16:31:48 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:28.727 16:31:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:28.727 16:31:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:28.727 ************************************ 00:18:28.727 START TEST locking_app_on_unlocked_coremask 00:18:28.727 ************************************ 00:18:28.727 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:18:28.727 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2665054 00:18:28.727 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:18:28.727 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2665054 /var/tmp/spdk.sock 00:18:28.727 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2665054 ']' 00:18:28.727 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.727 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:28.727 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.727 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:28.727 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:28.727 [2024-07-22 16:31:48.175255] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:28.727 [2024-07-22 16:31:48.175344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665054 ] 00:18:28.727 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.727 [2024-07-22 16:31:48.246396] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
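locking_app_on_unlocked_coremask, starting here, flips the roles: the first target opts out of locking, so the second (default-configured) target claims core 0 uncontested, and the locks_exist check is run against the second pid. Schematically, with the masks and sockets from this run (pid capture elided; second_pid is an illustrative variable):

  spdk_tgt -m 0x1 --disable-cpumask-locks &    # first app: takes no lock
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # second app: claims /var/tmp/spdk_cpu_lock_000
  locks_exist "$second_pid"                    # the lock belongs to the second app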
00:18:28.727 [2024-07-22 16:31:48.246442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.727 [2024-07-22 16:31:48.340349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.986 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:28.986 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:18:28.986 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2665179 00:18:28.986 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2665179 /var/tmp/spdk2.sock 00:18:28.986 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:28.986 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2665179 ']' 00:18:28.986 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:28.986 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:28.986 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:28.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:28.986 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:28.986 16:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:29.244 [2024-07-22 16:31:48.643825] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:29.244 [2024-07-22 16:31:48.643916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665179 ] 00:18:29.244 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.244 [2024-07-22 16:31:48.744025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.502 [2024-07-22 16:31:48.923958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.067 16:31:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:30.067 16:31:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:18:30.067 16:31:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2665179 00:18:30.067 16:31:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2665179 00:18:30.067 16:31:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:30.633 lslocks: write error 00:18:30.633 16:31:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2665054 00:18:30.633 16:31:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2665054 ']' 00:18:30.633 16:31:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2665054 00:18:30.633 16:31:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:18:30.633 16:31:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:30.633 16:31:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2665054 00:18:30.633 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:30.633 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:30.633 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2665054' 00:18:30.633 killing process with pid 2665054 00:18:30.633 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2665054 00:18:30.633 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2665054 00:18:31.198 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2665179 00:18:31.198 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2665179 ']' 00:18:31.198 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2665179 00:18:31.198 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:18:31.198 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:31.198 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2665179 00:18:31.198 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:18:31.198 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:31.198 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2665179' 00:18:31.198 killing process with pid 2665179 00:18:31.198 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2665179 00:18:31.198 16:31:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2665179 00:18:31.764 00:18:31.764 real 0m3.095s 00:18:31.764 user 0m3.283s 00:18:31.764 sys 0m1.043s 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:31.764 ************************************ 00:18:31.764 END TEST locking_app_on_unlocked_coremask 00:18:31.764 ************************************ 00:18:31.764 16:31:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:18:31.764 16:31:51 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:31.764 16:31:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:31.764 16:31:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:31.764 ************************************ 00:18:31.764 START TEST locking_app_on_locked_coremask 00:18:31.764 ************************************ 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2665483 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2665483 /var/tmp/spdk.sock 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2665483 ']' 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:31.764 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:31.764 [2024-07-22 16:31:51.325110] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
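locking_app_on_locked_coremask, which begins here, expects the second spdk_tgt below to die on its core claim, so its waitforlisten is wrapped in NOT, a helper that inverts exit status and makes an expected failure count as a pass. A simplified sketch matching the es handling visible in the trace (the real helper in autotest_common.sh also validates its argument and special-cases exit codes above 128):

  NOT() {
          local es=0
          "$@" || es=$?
          (( !es == 0 ))   # exit 0 only when the wrapped command failed
  }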
00:18:31.764 [2024-07-22 16:31:51.325185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665483 ] 00:18:31.764 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.764 [2024-07-22 16:31:51.391283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.023 [2024-07-22 16:31:51.485603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2665555 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2665555 /var/tmp/spdk2.sock 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2665555 /var/tmp/spdk2.sock 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2665555 /var/tmp/spdk2.sock 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2665555 ']' 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:32.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:32.281 16:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:32.281 [2024-07-22 16:31:51.784920] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:32.281 [2024-07-22 16:31:51.785026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665555 ] 00:18:32.281 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.281 [2024-07-22 16:31:51.896357] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2665483 has claimed it. 00:18:32.281 [2024-07-22 16:31:51.896419] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:18:33.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2665555) - No such process 00:18:33.214 ERROR: process (pid: 2665555) is no longer running 00:18:33.214 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:33.214 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:18:33.214 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:18:33.214 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:33.214 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:33.214 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:33.214 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2665483 00:18:33.214 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2665483 00:18:33.214 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:33.474 lslocks: write error 00:18:33.474 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2665483 00:18:33.474 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2665483 ']' 00:18:33.474 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2665483 00:18:33.474 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:18:33.474 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:33.474 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2665483 00:18:33.474 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:33.474 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:33.474 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2665483' 00:18:33.474 killing process with pid 2665483 00:18:33.474 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2665483 00:18:33.474 16:31:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2665483 00:18:33.732 00:18:33.732 real 0m2.066s 00:18:33.732 user 0m2.257s 00:18:33.732 sys 0m0.690s 00:18:33.732 16:31:53 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:18:33.732 16:31:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:33.732 ************************************ 00:18:33.732 END TEST locking_app_on_locked_coremask 00:18:33.732 ************************************ 00:18:33.732 16:31:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:18:33.732 16:31:53 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:33.732 16:31:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:33.732 16:31:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:33.990 ************************************ 00:18:33.990 START TEST locking_overlapped_coremask 00:18:33.990 ************************************ 00:18:33.990 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:18:33.990 16:31:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2665783 00:18:33.990 16:31:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:18:33.990 16:31:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2665783 /var/tmp/spdk.sock 00:18:33.990 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2665783 ']' 00:18:33.990 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.990 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:33.990 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.990 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:33.990 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:33.990 [2024-07-22 16:31:53.437181] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:33.990 [2024-07-22 16:31:53.437272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665783 ] 00:18:33.990 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.990 [2024-07-22 16:31:53.509651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:33.990 [2024-07-22 16:31:53.601203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.990 [2024-07-22 16:31:53.601268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.990 [2024-07-22 16:31:53.601286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2665797 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2665797 /var/tmp/spdk2.sock 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2665797 /var/tmp/spdk2.sock 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2665797 /var/tmp/spdk2.sock 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2665797 ']' 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:34.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:34.248 16:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:34.248 [2024-07-22 16:31:53.888943] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
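The failure about to be reported follows from the two cpumasks alone: -m 0x7 pins the first target to cores 0-2 and -m 0x1c pins the second to cores 2-4, so core 2 is claimed twice. The collision can be checked with shell arithmetic:

  #   0x07 = 0b00111 -> cores 0,1,2
  #   0x1c = 0b11100 -> cores 2,3,4
  printf '0x%x\n' $((0x7 & 0x1c))   # 0x4 -> bit 2 set -> core 2 is the contested one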
00:18:34.248 [2024-07-22 16:31:53.889066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665797 ] 00:18:34.505 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.505 [2024-07-22 16:31:53.990519] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2665783 has claimed it. 00:18:34.505 [2024-07-22 16:31:53.990576] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:18:35.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2665797) - No such process 00:18:35.070 ERROR: process (pid: 2665797) is no longer running 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2665783 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 2665783 ']' 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 2665783 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2665783 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2665783' 00:18:35.070 killing process with pid 2665783 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
2665783 00:18:35.070 16:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 2665783 00:18:35.634 00:18:35.634 real 0m1.620s 00:18:35.634 user 0m4.358s 00:18:35.634 sys 0m0.460s 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:35.634 ************************************ 00:18:35.634 END TEST locking_overlapped_coremask 00:18:35.634 ************************************ 00:18:35.634 16:31:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:18:35.634 16:31:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:35.634 16:31:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:35.634 16:31:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:35.634 ************************************ 00:18:35.634 START TEST locking_overlapped_coremask_via_rpc 00:18:35.634 ************************************ 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2665983 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2665983 /var/tmp/spdk.sock 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2665983 ']' 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:35.634 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:35.634 [2024-07-22 16:31:55.109782] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:35.634 [2024-07-22 16:31:55.109863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665983 ] 00:18:35.634 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.634 [2024-07-22 16:31:55.176739] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
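Before the via-rpc variant proceeds, note the check_remaining_locks assertion that closed the previous test: after the second target is refused, exactly the three lock files for cores 0-2 must remain. Reconstructed from the globs visible in the trace (cpu_locks.sh@36-38):

  check_remaining_locks() {
          locks=(/var/tmp/spdk_cpu_lock_*)
          locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
          [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }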
00:18:35.634 [2024-07-22 16:31:55.176776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:35.634 [2024-07-22 16:31:55.267467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.634 [2024-07-22 16:31:55.267525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.634 [2024-07-22 16:31:55.267528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.892 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:35.892 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:18:35.892 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2666086 00:18:35.892 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2666086 /var/tmp/spdk2.sock 00:18:35.892 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2666086 ']' 00:18:35.892 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:35.892 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:35.892 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:18:35.892 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:35.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:35.892 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:35.893 16:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.151 [2024-07-22 16:31:55.578385] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:36.151 [2024-07-22 16:31:55.578482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666086 ] 00:18:36.151 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.151 [2024-07-22 16:31:55.678947] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:18:36.151 [2024-07-22 16:31:55.678999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:36.409 [2024-07-22 16:31:55.854548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.409 [2024-07-22 16:31:55.858035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:36.409 [2024-07-22 16:31:55.858037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.974 [2024-07-22 16:31:56.534068] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2665983 has claimed it. 
00:18:36.974 request: 00:18:36.974 { 00:18:36.974 "method": "framework_enable_cpumask_locks", 00:18:36.974 "req_id": 1 00:18:36.974 } 00:18:36.974 Got JSON-RPC error response 00:18:36.974 response: 00:18:36.974 { 00:18:36.974 "code": -32603, 00:18:36.974 "message": "Failed to claim CPU core: 2" 00:18:36.974 } 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2665983 /var/tmp/spdk.sock 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2665983 ']' 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:36.974 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.231 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:37.231 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:18:37.231 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2666086 /var/tmp/spdk2.sock 00:18:37.231 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2666086 ']' 00:18:37.231 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:37.231 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:37.231 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:37.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
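The JSON-RPC exchange above is the runtime equivalent of the startup failure in locking_overlapped_coremask: both targets came up with --disable-cpumask-locks, the first then claimed cores 0-2 over RPC, and the second's attempt to claim its own mask fails on the shared core. Reproducing the negative case by hand, assuming the stock scripts/rpc.py client and the sockets from this run:

  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # first target: claims cores 0-2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: error -32603,
                                                                           # "Failed to claim CPU core: 2"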
00:18:37.232 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:37.232 16:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.489 16:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:37.489 16:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:18:37.489 16:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:18:37.489 16:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:37.489 16:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:37.489 16:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:37.489 00:18:37.489 real 0m1.976s 00:18:37.489 user 0m1.022s 00:18:37.489 sys 0m0.177s 00:18:37.489 16:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:37.489 16:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.489 ************************************ 00:18:37.489 END TEST locking_overlapped_coremask_via_rpc 00:18:37.489 ************************************ 00:18:37.489 16:31:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:18:37.489 16:31:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2665983 ]] 00:18:37.489 16:31:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2665983 00:18:37.489 16:31:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2665983 ']' 00:18:37.489 16:31:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2665983 00:18:37.489 16:31:57 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:18:37.489 16:31:57 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:37.489 16:31:57 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2665983 00:18:37.489 16:31:57 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:37.489 16:31:57 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:37.489 16:31:57 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2665983' 00:18:37.489 killing process with pid 2665983 00:18:37.489 16:31:57 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2665983 00:18:37.489 16:31:57 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2665983 00:18:38.054 16:31:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2666086 ]] 00:18:38.054 16:31:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2666086 00:18:38.054 16:31:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2666086 ']' 00:18:38.054 16:31:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2666086 00:18:38.054 16:31:57 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:18:38.054 16:31:57 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:18:38.054 16:31:57 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2666086 00:18:38.054 16:31:57 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:38.054 16:31:57 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:38.054 16:31:57 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2666086' 00:18:38.054 killing process with pid 2666086 00:18:38.054 16:31:57 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2666086 00:18:38.054 16:31:57 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2666086 00:18:38.312 16:31:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:18:38.312 16:31:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:18:38.312 16:31:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2665983 ]] 00:18:38.312 16:31:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2665983 00:18:38.312 16:31:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2665983 ']' 00:18:38.312 16:31:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2665983 00:18:38.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2665983) - No such process 00:18:38.312 16:31:57 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2665983 is not found' 00:18:38.312 Process with pid 2665983 is not found 00:18:38.312 16:31:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2666086 ]] 00:18:38.312 16:31:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2666086 00:18:38.312 16:31:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2666086 ']' 00:18:38.312 16:31:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2666086 00:18:38.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2666086) - No such process 00:18:38.312 16:31:57 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2666086 is not found' 00:18:38.312 Process with pid 2666086 is not found 00:18:38.312 16:31:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:18:38.312 00:18:38.312 real 0m15.812s 00:18:38.312 user 0m27.320s 00:18:38.312 sys 0m5.482s 00:18:38.312 16:31:57 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:38.312 16:31:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:38.312 ************************************ 00:18:38.312 END TEST cpu_locks 00:18:38.312 ************************************ 00:18:38.312 00:18:38.312 real 0m41.617s 00:18:38.312 user 1m18.607s 00:18:38.312 sys 0m9.560s 00:18:38.312 16:31:57 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:38.312 16:31:57 event -- common/autotest_common.sh@10 -- # set +x 00:18:38.312 ************************************ 00:18:38.312 END TEST event 00:18:38.312 ************************************ 00:18:38.312 16:31:57 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:18:38.312 16:31:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:38.312 16:31:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:38.312 16:31:57 -- common/autotest_common.sh@10 -- # set +x 00:18:38.570 ************************************ 00:18:38.570 START TEST thread 00:18:38.570 ************************************ 00:18:38.570 16:31:57 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:18:38.570 * Looking for test storage... 00:18:38.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:18:38.570 16:31:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:38.570 16:31:58 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:18:38.571 16:31:58 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:38.571 16:31:58 thread -- common/autotest_common.sh@10 -- # set +x 00:18:38.571 ************************************ 00:18:38.571 START TEST thread_poller_perf 00:18:38.571 ************************************ 00:18:38.571 16:31:58 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:38.571 [2024-07-22 16:31:58.051475] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:38.571 [2024-07-22 16:31:58.051532] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666451 ] 00:18:38.571 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.571 [2024-07-22 16:31:58.118057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.571 [2024-07-22 16:31:58.203443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.571 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:18:39.943 ====================================== 00:18:39.943 busy:2711838479 (cyc) 00:18:39.943 total_run_count: 301000 00:18:39.943 tsc_hz: 2700000000 (cyc) 00:18:39.943 ====================================== 00:18:39.943 poller_cost: 9009 (cyc), 3336 (nsec) 00:18:39.943 00:18:39.943 real 0m1.246s 00:18:39.943 user 0m1.159s 00:18:39.943 sys 0m0.082s 00:18:39.943 16:31:59 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:39.943 16:31:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:39.943 ************************************ 00:18:39.943 END TEST thread_poller_perf 00:18:39.943 ************************************ 00:18:39.943 16:31:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:39.943 16:31:59 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:18:39.943 16:31:59 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:39.943 16:31:59 thread -- common/autotest_common.sh@10 -- # set +x 00:18:39.943 ************************************ 00:18:39.943 START TEST thread_poller_perf 00:18:39.943 ************************************ 00:18:39.943 16:31:59 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:39.943 [2024-07-22 16:31:59.348788] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
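The poller_cost figure printed above follows directly from the counters: busy cycles divided by total_run_count gives cycles per poll, and tsc_hz converts cycles to time. A minimal shell sketch of that arithmetic, using the numbers from the 1-microsecond run; the variable names are illustrative, not poller_perf's own:

    busy=2711838479 runs=301000 tsc_hz=2700000000
    cost_cyc=$(( busy / runs ))                       # 2711838479 / 301000 = 9009 cyc
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 9009 cyc / 2.7 GHz = 3336 nsec
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The same arithmetic reproduces the 0-microseconds run that follows: 2702769966 / 3856000 = 700 cyc, and 700 / 2.7 = 259 nsec.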
00:18:39.943 [2024-07-22 16:31:59.348853] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666608 ] 00:18:39.943 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.943 [2024-07-22 16:31:59.424690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.943 [2024-07-22 16:31:59.515628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.943 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:18:41.318 ====================================== 00:18:41.318 busy:2702769966 (cyc) 00:18:41.318 total_run_count: 3856000 00:18:41.318 tsc_hz: 2700000000 (cyc) 00:18:41.318 ====================================== 00:18:41.318 poller_cost: 700 (cyc), 259 (nsec) 00:18:41.318 00:18:41.318 real 0m1.259s 00:18:41.318 user 0m1.157s 00:18:41.318 sys 0m0.096s 00:18:41.318 16:32:00 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:41.318 16:32:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:41.318 ************************************ 00:18:41.318 END TEST thread_poller_perf 00:18:41.318 ************************************ 00:18:41.318 16:32:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:18:41.318 00:18:41.318 real 0m2.651s 00:18:41.318 user 0m2.382s 00:18:41.318 sys 0m0.270s 00:18:41.318 16:32:00 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:41.318 16:32:00 thread -- common/autotest_common.sh@10 -- # set +x 00:18:41.318 ************************************ 00:18:41.318 END TEST thread 00:18:41.318 ************************************ 00:18:41.318 16:32:00 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:18:41.318 16:32:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:41.318 16:32:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:41.318 16:32:00 -- common/autotest_common.sh@10 -- # set +x 00:18:41.318 ************************************ 00:18:41.318 START TEST accel 00:18:41.318 ************************************ 00:18:41.318 16:32:00 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:18:41.318 * Looking for test storage... 
00:18:41.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:18:41.318 16:32:00 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:18:41.318 16:32:00 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:18:41.318 16:32:00 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:18:41.318 16:32:00 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2666817 00:18:41.318 16:32:00 accel -- accel/accel.sh@63 -- # waitforlisten 2666817 00:18:41.318 16:32:00 accel -- common/autotest_common.sh@827 -- # '[' -z 2666817 ']' 00:18:41.318 16:32:00 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.318 16:32:00 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:18:41.318 16:32:00 accel -- accel/accel.sh@61 -- # build_accel_config 00:18:41.318 16:32:00 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:41.318 16:32:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:41.318 16:32:00 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.318 16:32:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:41.318 16:32:00 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:41.318 16:32:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:41.318 16:32:00 accel -- common/autotest_common.sh@10 -- # set +x 00:18:41.318 16:32:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:41.318 16:32:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:41.318 16:32:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:18:41.318 16:32:00 accel -- accel/accel.sh@41 -- # jq -r . 00:18:41.318 [2024-07-22 16:32:00.773068] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:41.318 [2024-07-22 16:32:00.773152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666817 ] 00:18:41.318 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.318 [2024-07-22 16:32:00.849474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.318 [2024-07-22 16:32:00.939442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.577 16:32:01 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:41.577 16:32:01 accel -- common/autotest_common.sh@860 -- # return 0 00:18:41.577 16:32:01 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:18:41.577 16:32:01 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:18:41.577 16:32:01 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:18:41.577 16:32:01 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:18:41.577 16:32:01 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:18:41.577 16:32:01 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:18:41.577 16:32:01 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.577 16:32:01 accel -- common/autotest_common.sh@10 -- # set +x 00:18:41.577 16:32:01 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:18:41.577 16:32:01 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.835 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.835 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.835 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # IFS== 00:18:41.836 16:32:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:18:41.836 16:32:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:41.836 16:32:01 accel -- accel/accel.sh@75 -- # killprocess 2666817 00:18:41.836 16:32:01 accel -- common/autotest_common.sh@946 -- # '[' -z 2666817 ']' 00:18:41.836 16:32:01 accel -- common/autotest_common.sh@950 -- # kill -0 2666817 00:18:41.836 16:32:01 accel -- common/autotest_common.sh@951 -- # uname 00:18:41.836 16:32:01 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:41.836 16:32:01 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2666817 00:18:41.836 16:32:01 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:41.836 16:32:01 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:41.836 16:32:01 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2666817' 00:18:41.836 killing process with pid 2666817 00:18:41.836 16:32:01 accel -- common/autotest_common.sh@965 -- # kill 2666817 00:18:41.836 16:32:01 accel -- common/autotest_common.sh@970 -- # wait 2666817 00:18:42.095 16:32:01 accel -- accel/accel.sh@76 -- # trap - ERR 00:18:42.095 16:32:01 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:18:42.095 16:32:01 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:42.095 16:32:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:42.095 16:32:01 accel -- common/autotest_common.sh@10 -- # set +x 00:18:42.095 16:32:01 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:18:42.095 16:32:01 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:18:42.095 16:32:01 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:18:42.095 16:32:01 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:42.095 16:32:01 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:42.095 16:32:01 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:42.095 16:32:01 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:42.095 16:32:01 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:42.095 16:32:01 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:18:42.095 16:32:01 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:18:42.095 16:32:01 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:42.095 16:32:01 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:18:42.095 16:32:01 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:18:42.095 16:32:01 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:18:42.095 16:32:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:42.095 16:32:01 accel -- common/autotest_common.sh@10 -- # set +x 00:18:42.352 ************************************ 00:18:42.352 START TEST accel_missing_filename 00:18:42.352 ************************************ 00:18:42.352 16:32:01 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:18:42.352 16:32:01 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:18:42.352 16:32:01 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:18:42.352 16:32:01 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:18:42.352 16:32:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.352 16:32:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:18:42.352 16:32:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.352 16:32:01 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:18:42.352 16:32:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:18:42.352 16:32:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:18:42.352 16:32:01 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:42.352 16:32:01 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:42.352 16:32:01 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:42.352 16:32:01 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:42.352 16:32:01 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:42.352 16:32:01 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:18:42.353 16:32:01 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:18:42.353 [2024-07-22 16:32:01.783918] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:42.353 [2024-07-22 16:32:01.784000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666979 ] 00:18:42.353 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.353 [2024-07-22 16:32:01.854072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.353 [2024-07-22 16:32:01.945605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.611 [2024-07-22 16:32:02.006991] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:42.611 [2024-07-22 16:32:02.090797] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:18:42.611 A filename is required. 
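run_test wraps this invocation in NOT, so the test passes exactly when accel_perf exits non-zero: compress with no -l input file has nothing to read and must abort with 'A filename is required.' A sketch of the inversion idiom; the one-line body is an assumption, only the NOT name and the exit-status bookkeeping traced below are taken from the log:

    NOT() { ! "$@"; }                 # succeed only when the wrapped command fails
    NOT accel_perf -t 1 -w compress   # missing -l input file -> expected failure

The es= chain that follows shows how the harness normalizes the raw exit status before asserting on it.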
00:18:42.611 16:32:02 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:18:42.611 16:32:02 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:42.611 16:32:02 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:18:42.611 16:32:02 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:18:42.611 16:32:02 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:18:42.611 16:32:02 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:42.611 00:18:42.611 real 0m0.407s 00:18:42.611 user 0m0.285s 00:18:42.611 sys 0m0.150s 00:18:42.611 16:32:02 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:42.611 16:32:02 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:18:42.611 ************************************ 00:18:42.611 END TEST accel_missing_filename 00:18:42.611 ************************************ 00:18:42.611 16:32:02 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:18:42.611 16:32:02 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:18:42.611 16:32:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:42.611 16:32:02 accel -- common/autotest_common.sh@10 -- # set +x 00:18:42.611 ************************************ 00:18:42.611 START TEST accel_compress_verify 00:18:42.611 ************************************ 00:18:42.611 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:18:42.611 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:18:42.611 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:18:42.611 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:18:42.612 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.612 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:18:42.612 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.612 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:18:42.612 16:32:02 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:18:42.612 16:32:02 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:18:42.612 16:32:02 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:42.612 16:32:02 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:42.612 16:32:02 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:42.612 16:32:02 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:42.612 16:32:02 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:42.612 
16:32:02 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:18:42.612 16:32:02 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:18:42.612 [2024-07-22 16:32:02.245062] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:42.612 [2024-07-22 16:32:02.245125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667116 ] 00:18:42.871 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.871 [2024-07-22 16:32:02.319480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.871 [2024-07-22 16:32:02.410421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.871 [2024-07-22 16:32:02.471701] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:43.130 [2024-07-22 16:32:02.557750] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:18:43.130 00:18:43.130 Compression does not support the verify option, aborting. 00:18:43.130 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:18:43.130 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:43.130 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:18:43.130 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:18:43.130 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:18:43.130 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:43.130 00:18:43.130 real 0m0.415s 00:18:43.130 user 0m0.293s 00:18:43.130 sys 0m0.156s 00:18:43.130 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:43.130 16:32:02 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:18:43.130 ************************************ 00:18:43.130 END TEST accel_compress_verify 00:18:43.130 ************************************ 00:18:43.130 16:32:02 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:18:43.130 16:32:02 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:18:43.130 16:32:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:43.130 16:32:02 accel -- common/autotest_common.sh@10 -- # set +x 00:18:43.130 ************************************ 00:18:43.130 START TEST accel_wrong_workload 00:18:43.130 ************************************ 00:18:43.130 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:18:43.130 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:18:43.130 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:18:43.130 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:18:43.130 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.130 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:18:43.130 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.130 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
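Both negative tests normalize their exit codes the same way before the final (( !es == 0 )) assertion: a status above 128 has the shell's 128+signal offset stripped, and the surviving value is collapsed to 1 by the case "$es" table. Worked through with the two statuses in these traces:

    es=234 -> (( es > 128 )) -> es = 234 - 128 = 106 -> case "$es" -> es=1
    es=161 -> (( es > 128 )) -> es = 161 - 128 = 33  -> case "$es" -> es=1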
00:18:43.130 16:32:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:18:43.130 16:32:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:18:43.130 16:32:02 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:43.130 16:32:02 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:43.130 16:32:02 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:43.130 16:32:02 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:43.130 16:32:02 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:43.130 16:32:02 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:18:43.130 16:32:02 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:18:43.130 Unsupported workload type: foobar 00:18:43.130 [2024-07-22 16:32:02.703230] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:18:43.130 accel_perf options: 00:18:43.130 [-h help message] 00:18:43.130 [-q queue depth per core] 00:18:43.130 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:18:43.130 [-T number of threads per core 00:18:43.131 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:18:43.131 [-t time in seconds] 00:18:43.131 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:18:43.131 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:18:43.131 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:18:43.131 [-l for compress/decompress workloads, name of uncompressed input file 00:18:43.131 [-S for crc32c workload, use this seed value (default 0) 00:18:43.131 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:18:43.131 [-f for fill workload, use this BYTE value (default 255) 00:18:43.131 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:18:43.131 [-y verify result if this switch is on] 00:18:43.131 [-a tasks to allocate per core (default: same value as -q)] 00:18:43.131 Can be used to spread operations across a wider range of memory. 
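The listing above is accel_perf's own usage text, dumped because foobar is not among the documented -w workloads. The flags exercised elsewhere in this log all map onto it; a hypothetical standalone invocation built purely from documented switches would look like:

    accel_perf -q 64 -o 4096 -t 1 -w crc32c -S 32 -y   # queue depth, transfer size, run time, workload, crc seed, verify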
00:18:43.131 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:18:43.131 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:43.131 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:43.131 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:43.131 00:18:43.131 real 0m0.022s 00:18:43.131 user 0m0.013s 00:18:43.131 sys 0m0.009s 00:18:43.131 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:43.131 16:32:02 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:18:43.131 ************************************ 00:18:43.131 END TEST accel_wrong_workload 00:18:43.131 ************************************ 00:18:43.131 16:32:02 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:18:43.131 16:32:02 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:18:43.131 16:32:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:43.131 16:32:02 accel -- common/autotest_common.sh@10 -- # set +x 00:18:43.131 Error: writing output failed: Broken pipe 00:18:43.131 ************************************ 00:18:43.131 START TEST accel_negative_buffers 00:18:43.131 ************************************ 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:18:43.131 16:32:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:18:43.131 16:32:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:18:43.131 16:32:02 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:43.131 16:32:02 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:43.131 16:32:02 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:43.131 16:32:02 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:43.131 16:32:02 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:43.131 16:32:02 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:18:43.131 16:32:02 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:18:43.131 -x option must be non-negative. 
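Same assertion shape as the foobar case: the usage text documents -x as the xor source-buffer count with a minimum of 2, so the harness passes -1 and expects the argument parser to bail out before the app ever starts:

    NOT accel_perf -t 1 -w xor -y -x -1   # rejected with '-x option must be non-negative.'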
00:18:43.131 [2024-07-22 16:32:02.765904] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:18:43.131 accel_perf options: 00:18:43.131 [-h help message] 00:18:43.131 [-q queue depth per core] 00:18:43.131 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:18:43.131 [-T number of threads per core 00:18:43.131 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:18:43.131 [-t time in seconds] 00:18:43.131 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:18:43.131 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:18:43.131 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:18:43.131 [-l for compress/decompress workloads, name of uncompressed input file 00:18:43.131 [-S for crc32c workload, use this seed value (default 0) 00:18:43.131 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:18:43.131 [-f for fill workload, use this BYTE value (default 255) 00:18:43.131 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:18:43.131 [-y verify result if this switch is on] 00:18:43.131 [-a tasks to allocate per core (default: same value as -q)] 00:18:43.131 Can be used to spread operations across a wider range of memory. 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:43.131 00:18:43.131 real 0m0.022s 00:18:43.131 user 0m0.015s 00:18:43.131 sys 0m0.007s 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:43.131 16:32:02 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:18:43.131 ************************************ 00:18:43.131 END TEST accel_negative_buffers 00:18:43.131 ************************************ 00:18:43.390 Error: writing output failed: Broken pipe 00:18:43.390 16:32:02 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:18:43.390 16:32:02 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:18:43.390 16:32:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:43.390 16:32:02 accel -- common/autotest_common.sh@10 -- # set +x 00:18:43.390 ************************************ 00:18:43.390 START TEST accel_crc32c 00:18:43.390 ************************************ 00:18:43.390 16:32:02 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:18:43.390 16:32:02 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:18:43.390 [2024-07-22 16:32:02.830928] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:43.390 [2024-07-22 16:32:02.831003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667186 ] 00:18:43.390 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.390 [2024-07-22 16:32:02.903774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.390 [2024-07-22 16:32:02.995278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:18:43.647 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:43.648 16:32:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:44.582 16:32:04 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:18:44.582 16:32:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:44.582 00:18:44.582 real 0m1.402s 00:18:44.582 user 0m1.254s 00:18:44.582 sys 0m0.151s 00:18:44.582 16:32:04 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:44.582 16:32:04 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:18:44.582 ************************************ 00:18:44.582 END TEST accel_crc32c 00:18:44.582 ************************************ 00:18:44.841 16:32:04 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:18:44.841 16:32:04 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:18:44.841 16:32:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:44.841 16:32:04 accel -- common/autotest_common.sh@10 -- # set +x 00:18:44.841 ************************************ 00:18:44.841 START TEST accel_crc32c_C2 00:18:44.841 ************************************ 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:18:44.841 16:32:04 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:18:44.841 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:18:44.841 [2024-07-22 16:32:04.282153] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:44.841 [2024-07-22 16:32:04.282215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667457 ] 00:18:44.841 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.841 [2024-07-22 16:32:04.355703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.841 [2024-07-22 16:32:04.447502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:45.100 16:32:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:46.031 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
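The long val= stream in these tests appears to be accel_test parsing accel_perf's output: IFS=: splits each line at the colon, read -r loads the halves into var and val, and the case "$var" branches latch fields such as accel_module=software and accel_opc=crc32c, which the [[ -n software ]] and [[ -n crc32c ]] checks below assert on. A structural sketch of that loop; only the IFS=:/read/case skeleton and the two assignments come from the trace, while the match patterns and config stub are assumptions:

    while IFS=: read -r var val; do
        case "$var" in
            *module*) accel_module=$val ;;   # hypothetical pattern; the trace shows accel_module=software
            *opc*)    accel_opc=$val ;;      # hypothetical pattern; the trace shows accel_opc=crc32c
        esac
    done < <(accel_perf -c <(echo '{}') -t 1 -w crc32c -y -C 2)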
00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:46.288 00:18:46.288 real 0m1.423s 00:18:46.288 user 0m1.266s 00:18:46.288 sys 0m0.159s 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:46.288 16:32:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:18:46.288 ************************************ 00:18:46.288 END TEST accel_crc32c_C2 00:18:46.288 ************************************ 00:18:46.288 16:32:05 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:18:46.288 16:32:05 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:18:46.288 16:32:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:46.288 16:32:05 accel -- common/autotest_common.sh@10 -- # set +x 00:18:46.288 ************************************ 00:18:46.288 START TEST accel_copy 00:18:46.288 ************************************ 00:18:46.288 16:32:05 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:18:46.288 16:32:05 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:18:46.288 16:32:05 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:18:46.288 16:32:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:18:46.288 16:32:05 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:18:46.288 16:32:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:18:46.288 
00:18:46.288 16:32:05 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:18:46.288 16:32:05 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:18:46.288 [2024-07-22 16:32:05.753417] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:18:46.288 [2024-07-22 16:32:05.753482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667616 ]
00:18:46.288 EAL: No free 2048 kB hugepages reported on node 1
00:18:46.288 [2024-07-22 16:32:05.825040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:46.288 [2024-07-22 16:32:05.918151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:46.546 16:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:18:46.546 16:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:18:46.546 16:32:05 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:18:46.546 16:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:46.546 16:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:18:46.546 16:32:05 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:18:46.546 16:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:18:46.546 16:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:18:46.546 16:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:18:46.546 16:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:18:46.546 16:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:18:47.917 16:32:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:47.917 16:32:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:18:47.917 16:32:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:47.917 real 0m1.420s
00:18:47.917 user 0m1.267s
00:18:47.917 sys 0m0.156s
00:18:47.917 16:32:07 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:18:47.917 16:32:07 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:18:47.917 ************************************
00:18:47.917 END TEST accel_copy
00:18:47.917 ************************************
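Read back from the traced values (copy opcode, software module, '4096 bytes', queue depth 32, '1 seconds', verify Yes), the copy case amounts to a one-liner; a hypothetical standalone re-run, with the workspace path taken from the log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path from the log
# 1 second (-t 1) of 'copy' operations with data verification (-y); block size
# and queue depth stay at the defaults the trace reports (4096 bytes, 32)
"$SPDK/build/examples/accel_perf" -t 1 -w copy -y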
00:18:47.917 16:32:07 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:18:47.917 16:32:07 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:18:47.917 16:32:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:18:47.917 16:32:07 accel -- common/autotest_common.sh@10 -- # set +x
00:18:47.917 ************************************
00:18:47.917 START TEST accel_fill
00:18:47.917 ************************************
00:18:47.917 16:32:07 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:18:47.917 16:32:07 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:18:47.917 16:32:07 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:18:47.917 16:32:07 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:18:47.917 16:32:07 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:18:47.917 16:32:07 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:18:47.917 [2024-07-22 16:32:07.226218] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:18:47.917 [2024-07-22 16:32:07.226286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667767 ]
00:18:47.917 EAL: No free 2048 kB hugepages reported on node 1
00:18:47.917 [2024-07-22 16:32:07.299567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:47.917 [2024-07-22 16:32:07.390286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:47.917 16:32:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:18:47.917 16:32:07 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:18:47.917 16:32:07 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:18:47.917 16:32:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:18:47.917 16:32:07 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:47.918 16:32:07 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:18:47.918 16:32:07 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:18:47.918 16:32:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:18:47.918 16:32:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:18:47.918 16:32:07 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:18:47.918 16:32:07 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:18:47.918 16:32:07 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:18:49.289 16:32:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:49.290 16:32:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:18:49.290 16:32:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:49.290 real 0m1.410s
00:18:49.290 user 0m1.250s
00:18:49.290 sys 0m0.163s
00:18:49.290 16:32:08 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable
00:18:49.290 16:32:08 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:18:49.290 ************************************
00:18:49.290 END TEST accel_fill
00:18:49.290 ************************************
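The fill case adds three knobs; matching them against the traced values (val=0x80, val=64, val=64) suggests -f 128 is the fill byte (0x80), -q 64 the queue depth, and -a 64 the buffer alignment -- an inference from the log, not a documented mapping:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path from the log
# fill 4096-byte buffers with byte 128 (0x80 in the trace), 64 ops in flight,
# 64-byte aligned allocations, verify results, run for 1 second
"$SPDK/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y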
00:18:49.290 16:32:08 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:18:49.290 16:32:08 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:18:49.290 16:32:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:18:49.290 16:32:08 accel -- common/autotest_common.sh@10 -- # set +x
00:18:49.290 ************************************
00:18:49.290 START TEST accel_copy_crc32c
00:18:49.290 ************************************
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:18:49.290 [2024-07-22 16:32:08.680483] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:18:49.290 [2024-07-22 16:32:08.680548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668050 ]
00:18:49.290 EAL: No free 2048 kB hugepages reported on node 1
00:18:49.290 [2024-07-22 16:32:08.749267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:49.290 [2024-07-22 16:32:08.842376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:18:49.290 16:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:18:50.663 16:32:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:50.663 16:32:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:18:50.663 16:32:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:50.663 real 0m1.415s
00:18:50.663 user 0m1.270s
00:18:50.663 sys 0m0.148s
00:18:50.663 16:32:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable
00:18:50.663 16:32:10 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:18:50.663 ************************************
00:18:50.663 END TEST accel_copy_crc32c
00:18:50.663 ************************************
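copy_crc32c fuses the two earlier operations: each 4096-byte source is copied and a CRC-32C computed over it in a single accel operation, with the traced seed of 0. A standalone sketch of the same run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path from the log
# combined copy + CRC-32C for 1 second with verification; the trace shows a
# CRC seed of 0 and matching 4096-byte source and destination sizes
"$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y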
00:18:50.663 16:32:10 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:18:50.663 16:32:10 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:18:50.663 16:32:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:18:50.663 16:32:10 accel -- common/autotest_common.sh@10 -- # set +x
00:18:50.663 ************************************
00:18:50.663 START TEST accel_copy_crc32c_C2
00:18:50.663 ************************************
00:18:50.663 16:32:10 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:18:50.663 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:18:50.663 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:18:50.663 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:18:50.663 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:18:50.663 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:18:50.663 [2024-07-22 16:32:10.141484] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:18:50.663 [2024-07-22 16:32:10.141546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668202 ]
00:18:50.663 EAL: No free 2048 kB hugepages reported on node 1
00:18:50.663 [2024-07-22 16:32:10.213059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:50.663 [2024-07-22 16:32:10.305847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
accel/accel.sh@21 -- # case "$var" in 00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:50.922 16:32:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:52.296 00:18:52.296 real 0m1.416s 00:18:52.296 user 0m1.262s 00:18:52.296 sys 0m0.157s 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:52.296 16:32:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:18:52.296 
00:18:52.296 16:32:11 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:18:52.296 16:32:11 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:18:52.296 16:32:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:18:52.296 16:32:11 accel -- common/autotest_common.sh@10 -- # set +x
00:18:52.296 ************************************
00:18:52.296 START TEST accel_dualcast
00:18:52.296 ************************************
00:18:52.296 16:32:11 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y
00:18:52.296 16:32:11 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:18:52.296 16:32:11 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:18:52.296 16:32:11 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:18:52.296 16:32:11 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:18:52.296 16:32:11 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:18:52.296 [2024-07-22 16:32:11.602025] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:18:52.296 [2024-07-22 16:32:11.602082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668361 ]
00:18:52.296 EAL: No free 2048 kB hugepages reported on node 1
00:18:52.296 [2024-07-22 16:32:11.673686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:52.296 [2024-07-22 16:32:11.767293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:52.297 16:32:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:18:52.297 16:32:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:18:52.297 16:32:11 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:18:52.297 16:32:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:52.297 16:32:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:18:52.297 16:32:11 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:18:52.297 16:32:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:18:52.297 16:32:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:18:52.297 16:32:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:18:52.297 16:32:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:18:52.297 16:32:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:18:53.670 16:32:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:53.670 16:32:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:18:53.670 16:32:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:53.670 real 0m1.409s
00:18:53.670 user 0m1.260s
00:18:53.670 sys 0m0.152s
00:18:53.670 16:32:12 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable
00:18:53.670 16:32:12 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:18:53.670 ************************************
00:18:53.670 END TEST accel_dualcast
00:18:53.670 ************************************
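dualcast, as the name suggests, writes one 4096-byte source buffer to two destinations per operation; otherwise it runs with the same defaults as the copy case. A hypothetical standalone run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path from the log
# dualcast: each operation writes the source to two destination buffers;
# -y verifies the data, -t 1 keeps the run to 1 second as in the trace
"$SPDK/build/examples/accel_perf" -t 1 -w dualcast -y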
00:18:53.670 16:32:13 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:18:53.670 16:32:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:18:53.670 16:32:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:18:53.670 16:32:13 accel -- common/autotest_common.sh@10 -- # set +x
00:18:53.670 ************************************
00:18:53.670 START TEST accel_compare
00:18:53.670 ************************************
00:18:53.670 16:32:13 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y
00:18:53.670 16:32:13 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:18:53.670 16:32:13 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:18:53.670 16:32:13 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:18:53.670 16:32:13 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:18:53.670 16:32:13 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:18:53.670 [2024-07-22 16:32:13.054190] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:18:53.670 [2024-07-22 16:32:13.054276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668516 ]
00:18:53.670 EAL: No free 2048 kB hugepages reported on node 1
00:18:53.670 [2024-07-22 16:32:13.125258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:53.670 [2024-07-22 16:32:13.220843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
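compare is the read-only counterpart, checking two equal-sized buffers for equality instead of copying; the log breaks off before this run's timing summary, but the standalone form would be:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path from the log
# compare: check two 4096-byte buffers for equality instead of copying;
# runs for 1 second with verification, as in the other cases
"$SPDK/build/examples/accel_perf" -t 1 -w compare -y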
accel/accel.sh@19 -- # read -r var val 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:53.671 16:32:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:55.044 16:32:14 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:18:55.044 16:32:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:55.044 00:18:55.044 real 0m1.397s 00:18:55.044 user 0m1.241s 00:18:55.044 sys 0m0.158s 00:18:55.044 16:32:14 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:55.044 16:32:14 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:18:55.044 ************************************ 00:18:55.044 END TEST accel_compare 00:18:55.044 ************************************ 00:18:55.044 16:32:14 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:18:55.044 16:32:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:18:55.044 16:32:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:55.044 16:32:14 accel -- common/autotest_common.sh@10 -- # set +x 00:18:55.044 ************************************ 00:18:55.044 START TEST accel_xor 00:18:55.044 ************************************ 00:18:55.044 16:32:14 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:18:55.044 16:32:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:18:55.044 16:32:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:18:55.044 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.045 16:32:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:18:55.045 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.045 16:32:14 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:18:55.045 16:32:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:18:55.045 16:32:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:55.045 16:32:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:55.045 16:32:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:55.045 16:32:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:55.045 16:32:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:55.045 16:32:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:18:55.045 16:32:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:18:55.045 [2024-07-22 16:32:14.492388] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
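The first xor pass uses the default of two source buffers, matching val=2 in the trace; the invocation is otherwise the same sketch as above:

  ./build/examples/accel_perf -t 1 -w xor -y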
00:18:55.045 [2024-07-22 16:32:14.492454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668789 ] 00:18:55.045 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.045 [2024-07-22 16:32:14.564372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.045 [2024-07-22 16:32:14.657568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:18:55.303 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:55.304 16:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.680 
16:32:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:56.680 00:18:56.680 real 0m1.421s 00:18:56.680 user 0m1.268s 00:18:56.680 sys 0m0.155s 00:18:56.680 16:32:15 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:56.680 16:32:15 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:18:56.680 ************************************ 00:18:56.680 END TEST accel_xor 00:18:56.680 ************************************ 00:18:56.680 16:32:15 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:18:56.680 16:32:15 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:18:56.680 16:32:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:56.680 16:32:15 accel -- common/autotest_common.sh@10 -- # set +x 00:18:56.680 ************************************ 00:18:56.680 START TEST accel_xor 00:18:56.680 ************************************ 00:18:56.680 16:32:15 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:18:56.680 16:32:15 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:18:56.680 [2024-07-22 16:32:15.955298] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
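The second xor pass adds -x 3, which raises the xor source-buffer count to three (val=3 in the trace):

  ./build/examples/accel_perf -t 1 -w xor -y -x 3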
00:18:56.680 [2024-07-22 16:32:15.955367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668947 ] 00:18:56.680 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.680 [2024-07-22 16:32:16.029915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.680 [2024-07-22 16:32:16.122660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.680 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.681 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.681 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.681 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.681 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:56.681 16:32:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:56.681 16:32:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:56.681 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:56.681 16:32:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:58.054 
16:32:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:18:58.054 16:32:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:58.055 16:32:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:18:58.055 16:32:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:58.055 00:18:58.055 real 0m1.418s 00:18:58.055 user 0m1.266s 00:18:58.055 sys 0m0.155s 00:18:58.055 16:32:17 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:58.055 16:32:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:18:58.055 ************************************ 00:18:58.055 END TEST accel_xor 00:18:58.055 ************************************ 00:18:58.055 16:32:17 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:18:58.055 16:32:17 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:18:58.055 16:32:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:58.055 16:32:17 accel -- common/autotest_common.sh@10 -- # set +x 00:18:58.055 ************************************ 00:18:58.055 START TEST accel_dif_verify 00:18:58.055 ************************************ 00:18:58.055 16:32:17 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:18:58.055 [2024-07-22 16:32:17.420345] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
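dif_verify drops the -y flag (val=No in the trace) and runs with default DIF geometry; the trace shows two '4096 bytes' values plus '512 bytes' and '8 bytes', presumably the buffer and block sizing and the 8-byte DIF field. Standalone sketch, same assumptions as above:

  ./build/examples/accel_perf -t 1 -w dif_verify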
00:18:58.055 [2024-07-22 16:32:17.420410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669100 ] 00:18:58.055 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.055 [2024-07-22 16:32:17.491503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.055 [2024-07-22 16:32:17.584572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 
16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:58.055 16:32:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:59.429 
16:32:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:18:59.429 16:32:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:59.429 00:18:59.429 real 0m1.415s 00:18:59.429 user 0m1.260s 00:18:59.429 sys 0m0.160s 00:18:59.429 16:32:18 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:59.429 16:32:18 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:18:59.429 ************************************ 00:18:59.429 END TEST accel_dif_verify 00:18:59.429 ************************************ 00:18:59.429 16:32:18 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:18:59.429 16:32:18 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:18:59.429 16:32:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:59.429 16:32:18 accel -- common/autotest_common.sh@10 -- # set +x 00:18:59.429 ************************************ 00:18:59.429 START TEST accel_dif_generate 00:18:59.429 ************************************ 00:18:59.429 16:32:18 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
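dif_generate is driven the same way, with only the workload name changed; the trace again shows the 4096/512/8-byte values:

  ./build/examples/accel_perf -t 1 -w dif_generate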
00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:18:59.429 16:32:18 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:18:59.429 [2024-07-22 16:32:18.882095] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:59.429 [2024-07-22 16:32:18.882156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669377 ] 00:18:59.429 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.429 [2024-07-22 16:32:18.955598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.429 [2024-07-22 16:32:19.051572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:18:59.688 16:32:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:19:01.061 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:19:01.062 16:32:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:01.062 16:32:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:19:01.062 16:32:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:01.062 00:19:01.062 real 0m1.422s 00:19:01.062 user 0m1.271s 00:19:01.062 sys 
0m0.155s 00:19:01.062 16:32:20 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:01.062 16:32:20 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:19:01.062 ************************************ 00:19:01.062 END TEST accel_dif_generate 00:19:01.062 ************************************ 00:19:01.062 16:32:20 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:19:01.062 16:32:20 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:19:01.062 16:32:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:01.062 16:32:20 accel -- common/autotest_common.sh@10 -- # set +x 00:19:01.062 ************************************ 00:19:01.062 START TEST accel_dif_generate_copy 00:19:01.062 ************************************ 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:19:01.062 [2024-07-22 16:32:20.352915] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
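dif_generate_copy keeps the same invocation shape, presumably generating DIF into a second 4096-byte buffer (the trace shows two '4096 bytes' values):

  ./build/examples/accel_perf -t 1 -w dif_generate_copy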
00:19:01.062 [2024-07-22 16:32:20.352990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669535 ] 00:19:01.062 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.062 [2024-07-22 16:32:20.426289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.062 [2024-07-22 16:32:20.521689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:01.062 16:32:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:02.436 00:19:02.436 real 0m1.427s 00:19:02.436 user 0m1.277s 00:19:02.436 sys 0m0.153s 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:02.436 16:32:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:19:02.436 ************************************ 00:19:02.436 END TEST accel_dif_generate_copy 00:19:02.436 ************************************ 00:19:02.436 16:32:21 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:19:02.436 16:32:21 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:19:02.436 16:32:21 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:19:02.436 16:32:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:02.436 16:32:21 accel -- common/autotest_common.sh@10 -- # set +x 00:19:02.436 ************************************ 00:19:02.436 START TEST accel_comp 00:19:02.436 ************************************ 00:19:02.436 16:32:21 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:19:02.436 16:32:21 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:19:02.436 [2024-07-22 16:32:21.823426] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:02.436 [2024-07-22 16:32:21.823490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669692 ] 00:19:02.436 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.436 [2024-07-22 16:32:21.900091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.436 [2024-07-22 16:32:21.996192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 
16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:22 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:02.436 16:32:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:19:03.809 16:32:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:03.809 00:19:03.809 real 0m1.433s 00:19:03.809 user 0m1.269s 00:19:03.809 sys 0m0.168s 00:19:03.809 16:32:23 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:03.809 16:32:23 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:19:03.809 ************************************ 00:19:03.809 END TEST accel_comp 00:19:03.809 ************************************ 00:19:03.809 16:32:23 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:19:03.809 16:32:23 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:19:03.809 16:32:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:03.809 16:32:23 accel -- common/autotest_common.sh@10 -- # set +x 00:19:03.809 ************************************ 00:19:03.809 START TEST accel_decomp 00:19:03.809 ************************************ 00:19:03.809 16:32:23 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:19:03.809 16:32:23 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:19:03.809 [2024-07-22 16:32:23.301354] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:03.809 [2024-07-22 16:32:23.301418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669866 ] 00:19:03.809 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.809 [2024-07-22 16:32:23.372180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.067 [2024-07-22 16:32:23.468867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.067 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:04.067 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.067 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.067 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.067 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:04.067 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.067 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.067 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.067 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:04.067 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.067 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:04.068 16:32:23 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:04.068 16:32:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:19:05.441 16:32:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:05.441 00:19:05.441 real 0m1.421s 00:19:05.441 user 0m1.276s 00:19:05.441 sys 0m0.149s 00:19:05.441 16:32:24 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:05.441 16:32:24 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:19:05.441 ************************************ 00:19:05.441 END TEST accel_decomp 00:19:05.441 ************************************ 00:19:05.441 
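[editor's note] Each "real/user/sys" triple above is the output of the shell's time builtin that the harness wraps around a test, bracketed by the START TEST / END TEST banners. A hedged sketch of such a run_test wrapper, consistent with this log's output but not necessarily the verbatim autotest_common.sh implementation:

```bash
# Hypothetical run_test-style wrapper matching the banners and the
# real/user/sys timing lines seen in this log; internals are assumed.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # emits the "real 0mX.XXXs" lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Usage mirroring the log:
# run_test accel_decomp accel_test -t 1 -w decompress -l "$SPDK/test/accel/bib" -y
```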
16:32:24 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:19:05.441 16:32:24 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:19:05.441 16:32:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:05.441 16:32:24 accel -- common/autotest_common.sh@10 -- # set +x 00:19:05.441 ************************************ 00:19:05.442 START TEST accel_decmop_full 00:19:05.442 ************************************ 00:19:05.442 16:32:24 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:19:05.442 16:32:24 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:19:05.442 [2024-07-22 16:32:24.768050] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:05.442 [2024-07-22 16:32:24.768109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670122 ] 00:19:05.442 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.442 [2024-07-22 16:32:24.841049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.442 [2024-07-22 16:32:24.938286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
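[editor's note] The EAL parameter dump and reactor notice above belong to the accel_perf invocation recorded a few entries earlier. Spelled out as a standalone command for readability; the flag readings in the trailing comments are inferred from the surrounding val= entries, not taken from accel_perf documentation.

```bash
# The accel_perf command line recorded in the xtrace above; comments
# are the editor's inference from the nearby val= entries.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" \
    -c /dev/fd/62 \
    -t 1 \
    -w decompress \
    -l "$SPDK/test/accel/bib" \
    -y -o 0
# -c  JSON accel config streamed in by build_accel_config
# -t  run the workload for 1 second ('1 seconds' in the val= dump)
# -w  opcode under test (decompress)
# -l  compressed input file (the 'bib' corpus)
# -y  verify the output (inferred)
# -o  I/O size; 0 appears to select the whole file ('111250 bytes')
```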
00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:05.442 16:32:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:19:06.815 16:32:26 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:06.815 00:19:06.815 real 0m1.426s 00:19:06.815 user 0m1.280s 00:19:06.815 sys 0m0.147s 00:19:06.815 16:32:26 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:06.815 16:32:26 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:19:06.815 ************************************ 00:19:06.815 END TEST accel_decmop_full 00:19:06.815 ************************************ 00:19:06.815 16:32:26 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:19:06.815 16:32:26 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:19:06.815 16:32:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:06.815 16:32:26 accel -- common/autotest_common.sh@10 -- # set +x 00:19:06.815 ************************************ 00:19:06.815 START TEST accel_decomp_mcore 00:19:06.815 ************************************ 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:19:06.815 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:19:06.816 [2024-07-22 16:32:26.240611] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:06.816 [2024-07-22 16:32:26.240678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670280 ] 00:19:06.816 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.816 [2024-07-22 16:32:26.312212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.816 [2024-07-22 16:32:26.411576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.816 [2024-07-22 16:32:26.411627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.816 [2024-07-22 16:32:26.411678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.816 [2024-07-22 16:32:26.411682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:07.074 16:32:26 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.074 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.075 16:32:26 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:07.075 16:32:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.009 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:08.268 00:19:08.268 real 0m1.439s 00:19:08.268 user 0m4.753s 00:19:08.268 sys 0m0.166s 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:08.268 16:32:27 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:19:08.268 ************************************ 00:19:08.268 END TEST accel_decomp_mcore 00:19:08.268 ************************************ 00:19:08.268 16:32:27 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:19:08.268 16:32:27 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:19:08.268 16:32:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:08.268 16:32:27 accel -- common/autotest_common.sh@10 -- # set +x 00:19:08.268 ************************************ 00:19:08.268 START TEST accel_decomp_full_mcore 00:19:08.268 ************************************ 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:19:08.268 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:19:08.268 [2024-07-22 16:32:27.731334] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:08.268 [2024-07-22 16:32:27.731395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670434 ] 00:19:08.268 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.268 [2024-07-22 16:32:27.806465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.268 [2024-07-22 16:32:27.903599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.268 [2024-07-22 16:32:27.903651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.268 [2024-07-22 16:32:27.903702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.268 [2024-07-22 16:32:27.903705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:19:08.527 16:32:27 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:19:08.527 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:08.528 16:32:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:09.902 00:19:09.902 real 0m1.440s 00:19:09.902 user 0m4.753s 00:19:09.902 sys 0m0.169s 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:09.902 16:32:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:19:09.902 ************************************ 00:19:09.902 END TEST accel_decomp_full_mcore 00:19:09.902 ************************************ 00:19:09.902 16:32:29 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:19:09.902 16:32:29 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:19:09.902 16:32:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:09.902 16:32:29 accel -- common/autotest_common.sh@10 -- # set +x 00:19:09.902 ************************************ 00:19:09.902 START TEST accel_decomp_mthread 00:19:09.903 ************************************ 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:19:09.903 [2024-07-22 16:32:29.219812] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:09.903 [2024-07-22 16:32:29.219876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670716 ] 00:19:09.903 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.903 [2024-07-22 16:32:29.292214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.903 [2024-07-22 16:32:29.387884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:09.903 16:32:29 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:19:11.277 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.278 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.278 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.278 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:11.278 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:19:11.278 16:32:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:11.278 00:19:11.278 real 0m1.434s 00:19:11.278 user 0m1.278s 00:19:11.278 sys 0m0.160s 00:19:11.278 16:32:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:11.278 16:32:30 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:19:11.278 ************************************ 00:19:11.278 END TEST accel_decomp_mthread 00:19:11.278 ************************************ 00:19:11.278 16:32:30 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:19:11.278 16:32:30 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:19:11.278 16:32:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:11.278 16:32:30 
accel -- common/autotest_common.sh@10 -- # set +x 00:19:11.278 ************************************ 00:19:11.278 START TEST accel_decomp_full_mthread 00:19:11.278 ************************************ 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:19:11.278 [2024-07-22 16:32:30.699726] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
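The accel_perf invocation traced here can be reproduced outside the harness. A minimal sketch, assuming $SPDK_DIR points at the same checkout the job uses; the harness additionally passes -c /dev/fd/62 to feed a JSON accel config, dropped here because the trace shows accel_json_cfg=() stays empty:

    # Re-run the multi-threaded, full-buffer decompress case by hand.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    args=(
      -t 1                           # run time: 1 second
      -w decompress                  # workload under test
      -l "$SPDK_DIR/test/accel/bib"  # pre-compressed input file
      -y                             # verify the decompressed output
      -o 0                           # per the trace, resolves to the full 111250-byte input
      -T 2                           # two worker threads
    )
    "$SPDK_DIR/build/examples/accel_perf" "${args[@]}"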
00:19:11.278 [2024-07-22 16:32:30.699779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670871 ] 00:19:11.278 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.278 [2024-07-22 16:32:30.769169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.278 [2024-07-22 16:32:30.862821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.278 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.536 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:11.537 16:32:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:12.911 00:19:12.911 real 0m1.453s 00:19:12.911 user 0m1.307s 00:19:12.911 sys 0m0.149s 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:12.911 16:32:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:19:12.911 ************************************ 00:19:12.911 END TEST accel_decomp_full_mthread 00:19:12.911 
************************************ 00:19:12.911 16:32:32 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:19:12.911 16:32:32 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:19:12.911 16:32:32 accel -- accel/accel.sh@137 -- # build_accel_config 00:19:12.911 16:32:32 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:12.911 16:32:32 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:12.911 16:32:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:12.911 16:32:32 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:12.911 16:32:32 accel -- common/autotest_common.sh@10 -- # set +x 00:19:12.911 16:32:32 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:12.911 16:32:32 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:12.911 16:32:32 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:12.911 16:32:32 accel -- accel/accel.sh@40 -- # local IFS=, 00:19:12.911 16:32:32 accel -- accel/accel.sh@41 -- # jq -r . 00:19:12.911 ************************************ 00:19:12.911 START TEST accel_dif_functional_tests 00:19:12.911 ************************************ 00:19:12.911 16:32:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:19:12.911 [2024-07-22 16:32:32.226867] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:12.911 [2024-07-22 16:32:32.226925] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671032 ] 00:19:12.911 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.911 [2024-07-22 16:32:32.296894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:12.911 [2024-07-22 16:32:32.393993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.911 [2024-07-22 16:32:32.394047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.911 [2024-07-22 16:32:32.394050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.911 00:19:12.911 00:19:12.911 CUnit - A unit testing framework for C - Version 2.1-3 00:19:12.911 http://cunit.sourceforge.net/ 00:19:12.911 00:19:12.911 00:19:12.911 Suite: accel_dif 00:19:12.911 Test: verify: DIF generated, GUARD check ...passed 00:19:12.911 Test: verify: DIF generated, APPTAG check ...passed 00:19:12.911 Test: verify: DIF generated, REFTAG check ...passed 00:19:12.911 Test: verify: DIF not generated, GUARD check ...[2024-07-22 16:32:32.489737] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:19:12.911 passed 00:19:12.911 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 16:32:32.489812] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:19:12.911 passed 00:19:12.911 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 16:32:32.489849] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:19:12.911 passed 00:19:12.911 Test: verify: APPTAG correct, APPTAG check ...passed 00:19:12.911 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 16:32:32.489937] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:19:12.911 passed 00:19:12.912 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:19:12.912 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:19:12.912 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:19:12.912 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 16:32:32.490111] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:19:12.912 passed 00:19:12.912 Test: verify copy: DIF generated, GUARD check ...passed 00:19:12.912 Test: verify copy: DIF generated, APPTAG check ...passed 00:19:12.912 Test: verify copy: DIF generated, REFTAG check ...passed 00:19:12.912 Test: verify copy: DIF not generated, GUARD check ...[2024-07-22 16:32:32.490285] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:19:12.912 passed 00:19:12.912 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-22 16:32:32.490328] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:19:12.912 passed 00:19:12.912 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 16:32:32.490369] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:19:12.912 passed 00:19:12.912 Test: generate copy: DIF generated, GUARD check ...passed 00:19:12.912 Test: generate copy: DIF generated, APPTAG check ...passed 00:19:12.912 Test: generate copy: DIF generated, REFTAG check ...passed 00:19:12.912 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:19:12.912 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:19:12.912 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:19:12.912 Test: generate copy: iovecs-len validate ...[2024-07-22 16:32:32.490623] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:19:12.912 passed 00:19:12.912 Test: generate copy: buffer alignment validate ...passed 00:19:12.912 00:19:12.912 Run Summary: Type Total Ran Passed Failed Inactive 00:19:12.912 suites 1 1 n/a 0 0 00:19:12.912 tests 26 26 26 0 0 00:19:12.912 asserts 115 115 115 0 n/a 00:19:12.912 00:19:12.912 Elapsed time = 0.003 seconds 00:19:13.170 00:19:13.170 real 0m0.515s 00:19:13.170 user 0m0.784s 00:19:13.170 sys 0m0.190s 00:19:13.170 16:32:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:13.170 16:32:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:19:13.170 ************************************ 00:19:13.170 END TEST accel_dif_functional_tests 00:19:13.170 ************************************ 00:19:13.170 00:19:13.170 real 0m32.058s 00:19:13.170 user 0m35.240s 00:19:13.170 sys 0m4.852s 00:19:13.170 16:32:32 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:13.170 16:32:32 accel -- common/autotest_common.sh@10 -- # set +x 00:19:13.170 ************************************ 00:19:13.170 END TEST accel 00:19:13.170 ************************************ 00:19:13.170 16:32:32 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:19:13.170 16:32:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:13.170 16:32:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:13.170 16:32:32 -- common/autotest_common.sh@10 -- # set +x 00:19:13.170 ************************************ 00:19:13.170 START TEST accel_rpc 00:19:13.170 ************************************ 00:19:13.170 16:32:32 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:19:13.428 * Looking for test storage... 00:19:13.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:19:13.428 16:32:32 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:19:13.428 16:32:32 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2671219 00:19:13.428 16:32:32 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:13.428 16:32:32 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2671219 00:19:13.428 16:32:32 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 2671219 ']' 00:19:13.428 16:32:32 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.428 16:32:32 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:13.428 16:32:32 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.428 16:32:32 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:13.428 16:32:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:13.428 [2024-07-22 16:32:32.880168] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
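The accel_rpc test drives a bare spdk_tgt entirely over JSON-RPC. A sketch of the sequence it exercises, calling scripts/rpc.py directly (same $SPDK_DIR assumption as above; the harness's waitforlisten helper is replaced here by a crude sleep):

    # Start the target paused so the opcode assignment lands before framework init.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"
    "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
    tgt_pid=$!
    sleep 1                                       # stand-in for the harness's waitforlisten
    "$RPC" accel_assign_opc -o copy -m software   # pin the copy opcode to the software module
    "$RPC" framework_start_init                   # now let the framework come up
    # Confirm the assignment took effect, as the test does with jq + grep.
    "$RPC" accel_get_opc_assignments | jq -r .copy | grep software
    kill "$tgt_pid"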
00:19:13.428 [2024-07-22 16:32:32.880269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671219 ] 00:19:13.428 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.428 [2024-07-22 16:32:32.952157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.428 [2024-07-22 16:32:33.044595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.428 16:32:33 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:13.428 16:32:33 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:19:13.428 16:32:33 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:19:13.428 16:32:33 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:19:13.428 16:32:33 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:19:13.428 16:32:33 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:19:13.428 16:32:33 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:19:13.428 16:32:33 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:13.428 16:32:33 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:13.428 16:32:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:13.687 ************************************ 00:19:13.687 START TEST accel_assign_opcode 00:19:13.687 ************************************ 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:19:13.687 [2024-07-22 16:32:33.109173] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:19:13.687 [2024-07-22 16:32:33.117181] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.687 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:19:13.945 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.945 16:32:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:19:13.946 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.946 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:19:13.946 16:32:33 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:19:13.946 16:32:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:19:13.946 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.946 software 00:19:13.946 00:19:13.946 real 0m0.297s 00:19:13.946 user 0m0.039s 00:19:13.946 sys 0m0.008s 00:19:13.946 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:13.946 16:32:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:19:13.946 ************************************ 00:19:13.946 END TEST accel_assign_opcode 00:19:13.946 ************************************ 00:19:13.946 16:32:33 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2671219 00:19:13.946 16:32:33 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 2671219 ']' 00:19:13.946 16:32:33 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 2671219 00:19:13.946 16:32:33 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:19:13.946 16:32:33 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:13.946 16:32:33 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2671219 00:19:13.946 16:32:33 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:13.946 16:32:33 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:13.946 16:32:33 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2671219' 00:19:13.946 killing process with pid 2671219 00:19:13.946 16:32:33 accel_rpc -- common/autotest_common.sh@965 -- # kill 2671219 00:19:13.946 16:32:33 accel_rpc -- common/autotest_common.sh@970 -- # wait 2671219 00:19:14.204 00:19:14.204 real 0m1.075s 00:19:14.204 user 0m0.991s 00:19:14.204 sys 0m0.437s 00:19:14.204 16:32:33 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:14.204 16:32:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:14.204 ************************************ 00:19:14.204 END TEST accel_rpc 00:19:14.204 ************************************ 00:19:14.466 16:32:33 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:19:14.466 16:32:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:14.466 16:32:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:14.466 16:32:33 -- common/autotest_common.sh@10 -- # set +x 00:19:14.466 ************************************ 00:19:14.466 START TEST app_cmdline 00:19:14.466 ************************************ 00:19:14.466 16:32:33 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:19:14.466 * Looking for test storage... 
00:19:14.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:19:14.466 16:32:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:19:14.466 16:32:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2671423 00:19:14.466 16:32:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:19:14.466 16:32:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2671423 00:19:14.466 16:32:33 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 2671423 ']' 00:19:14.466 16:32:33 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.466 16:32:33 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:14.466 16:32:33 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.466 16:32:33 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:14.466 16:32:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:19:14.466 [2024-07-22 16:32:33.998584] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:14.466 [2024-07-22 16:32:33.998662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671423 ] 00:19:14.466 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.466 [2024-07-22 16:32:34.064075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.766 [2024-07-22 16:32:34.150005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.766 16:32:34 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:14.766 16:32:34 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:19:14.766 16:32:34 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:19:15.058 { 00:19:15.058 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:19:15.059 "fields": { 00:19:15.059 "major": 24, 00:19:15.059 "minor": 5, 00:19:15.059 "patch": 1, 00:19:15.059 "suffix": "-pre", 00:19:15.059 "commit": "5fa2f5086" 00:19:15.059 } 00:19:15.059 } 00:19:15.059 16:32:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:19:15.059 16:32:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:19:15.059 16:32:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:19:15.059 16:32:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:19:15.059 16:32:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:19:15.059 16:32:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:19:15.059 16:32:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.059 16:32:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:19:15.059 16:32:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:19:15.059 16:32:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:15.059 16:32:34 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:15.341 request: 00:19:15.341 { 00:19:15.341 "method": "env_dpdk_get_mem_stats", 00:19:15.341 "req_id": 1 00:19:15.341 } 00:19:15.341 Got JSON-RPC error response 00:19:15.341 response: 00:19:15.341 { 00:19:15.341 "code": -32601, 00:19:15.341 "message": "Method not found" 00:19:15.341 } 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:15.341 16:32:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2671423 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 2671423 ']' 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 2671423 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2671423 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2671423' 00:19:15.341 killing process with pid 2671423 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@965 -- # kill 2671423 00:19:15.341 16:32:34 app_cmdline -- common/autotest_common.sh@970 -- # wait 2671423 00:19:15.951 00:19:15.951 real 0m1.495s 00:19:15.951 user 0m1.838s 00:19:15.951 sys 0m0.454s 00:19:15.951 16:32:35 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:15.951 16:32:35 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:19:15.951 ************************************ 00:19:15.951 END TEST app_cmdline 00:19:15.951 ************************************ 00:19:15.951 16:32:35 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:19:15.951 16:32:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:15.951 16:32:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:15.951 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:19:15.951 ************************************ 00:19:15.951 START TEST version 00:19:15.951 ************************************ 00:19:15.951 16:32:35 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:19:15.951 * Looking for test storage... 00:19:15.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:19:15.951 16:32:35 version -- app/version.sh@17 -- # get_header_version major 00:19:15.951 16:32:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:19:15.951 16:32:35 version -- app/version.sh@14 -- # cut -f2 00:19:15.951 16:32:35 version -- app/version.sh@14 -- # tr -d '"' 00:19:15.951 16:32:35 version -- app/version.sh@17 -- # major=24 00:19:15.951 16:32:35 version -- app/version.sh@18 -- # get_header_version minor 00:19:15.951 16:32:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:19:15.951 16:32:35 version -- app/version.sh@14 -- # cut -f2 00:19:15.951 16:32:35 version -- app/version.sh@14 -- # tr -d '"' 00:19:15.951 16:32:35 version -- app/version.sh@18 -- # minor=5 00:19:15.951 16:32:35 version -- app/version.sh@19 -- # get_header_version patch 00:19:15.951 16:32:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:19:15.951 16:32:35 version -- app/version.sh@14 -- # cut -f2 00:19:15.951 16:32:35 version -- app/version.sh@14 -- # tr -d '"' 00:19:15.951 16:32:35 version -- app/version.sh@19 -- # patch=1 00:19:15.951 16:32:35 version -- app/version.sh@20 -- # get_header_version suffix 00:19:15.951 16:32:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:19:15.951 16:32:35 version -- app/version.sh@14 -- # cut -f2 00:19:15.951 16:32:35 version -- app/version.sh@14 -- # tr -d '"' 00:19:15.951 16:32:35 version -- app/version.sh@20 -- # suffix=-pre 00:19:15.951 16:32:35 version -- app/version.sh@22 -- # version=24.5 00:19:15.951 16:32:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:19:15.951 16:32:35 version -- app/version.sh@25 -- # version=24.5.1 00:19:15.951 16:32:35 version -- app/version.sh@28 -- # version=24.5.1rc0 00:19:15.951 16:32:35 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:19:15.952 16:32:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
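version.sh extracts each version component from include/spdk/version.h with the grep/cut/tr pipeline traced above. A condensed sketch (the real helper, get_header_version, takes the lowercase component name; $SPDK_DIR as before):

    # Pull one SPDK_VERSION_* define out of version.h; cut -f2 relies on the
    # header separating the define name from its value with a tab.
    header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
        "$SPDK_DIR/include/spdk/version.h" | cut -f2 | tr -d '"'
    }
    major=$(header_version MAJOR)    # -> 24
    minor=$(header_version MINOR)    # -> 5
    patch=$(header_version PATCH)    # -> 1
    suffix=$(header_version SUFFIX)  # -> -pre

The script then composes 24.5, appends .1 because patch is nonzero, and maps the -pre suffix to rc0 so the result (24.5.1rc0) can be compared against python3 -c 'import spdk; print(spdk.__version__)' as traced below.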
00:19:15.952 16:32:35 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:19:15.952 16:32:35 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:19:15.952 00:19:15.952 real 0m0.098s 00:19:15.952 user 0m0.056s 00:19:15.952 sys 0m0.063s 00:19:15.952 16:32:35 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:15.952 16:32:35 version -- common/autotest_common.sh@10 -- # set +x 00:19:15.952 ************************************ 00:19:15.952 END TEST version 00:19:15.952 ************************************ 00:19:15.952 16:32:35 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:19:15.952 16:32:35 -- spdk/autotest.sh@198 -- # uname -s 00:19:15.952 16:32:35 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:19:15.952 16:32:35 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:19:15.952 16:32:35 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:19:15.952 16:32:35 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:19:15.952 16:32:35 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:15.952 16:32:35 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:15.952 16:32:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.952 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:19:15.952 16:32:35 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:15.952 16:32:35 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:19:15.952 16:32:35 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:19:15.952 16:32:35 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:19:15.952 16:32:35 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:19:15.952 16:32:35 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:19:15.952 16:32:35 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:19:15.952 16:32:35 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:15.952 16:32:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:15.952 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:19:16.211 ************************************ 00:19:16.211 START TEST nvmf_tcp 00:19:16.211 ************************************ 00:19:16.211 16:32:35 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:19:16.211 * Looking for test storage... 00:19:16.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.211 16:32:35 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.211 16:32:35 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.211 16:32:35 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.211 16:32:35 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.211 16:32:35 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.211 16:32:35 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.211 16:32:35 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:19:16.211 16:32:35 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:19:16.211 16:32:35 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:16.211 16:32:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:19:16.211 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:19:16.211 16:32:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:16.211 16:32:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:16.211 16:32:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:16.211 ************************************ 00:19:16.211 START TEST nvmf_example 00:19:16.211 ************************************ 00:19:16.211 16:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:19:16.211 * Looking for test storage... 
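Both source operations above pull in nvmf/common.sh, which fixes the connection identity that every TCP test reuses. A sketch of the observable defaults, assuming nvme-cli provides gen-hostnqn; the NVME_HOSTID derivation is an assumption inferred from the traced values, since only the results are visible in the log:

NVMF_PORT=4420                        # primary NVMe/TCP listener
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<per-host uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumption: strip everything through "uuid:"
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")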
00:19:16.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.211 16:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.211 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:19:16.212 16:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:18.756 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.756 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:18.757 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:18.757 Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:18.757 Found net devices under 
0000:82:00.0: cvl_0_0 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:18.757 Found net devices under 0000:82:00.1: cvl_0_1 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:19:18.757 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:19.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:19:19.016 00:19:19.016 --- 10.0.0.2 ping statistics --- 00:19:19.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.016 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:19:19.016 00:19:19.016 --- 10.0.0.1 ping statistics --- 00:19:19.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.016 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2673747 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2673747 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 2673747 ']' 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
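The nvmf_tcp_init sequence just traced can be replayed by hand. A condensed version, to be run as root, with the interface names, namespace and addresses exactly as in this run (two ice ports already exposed as cvl_0_0/cvl_0_1):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                # the target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                          # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back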
00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:19.016 16:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:19.016 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.950 16:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:19.951 16:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:19.951 EAL: No free 2048 kB hugepages reported on node 1 
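The rpc_cmd calls above provision the whole target; rpc_cmd wraps scripts/rpc.py talking to /var/tmp/spdk.sock, so the same setup can be sketched as direct rpc.py invocations followed by the initiator-side perf run (paths assume an SPDK checkout; the transport options are copied verbatim from the trace):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB I/O unit
./scripts/rpc.py bdev_malloc_create 64 512                 # 64 MiB RAM bdev, 512 B blocks -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                               # -a: allow any host, -s: serial number
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
# initiator side, as run below: 64-deep queue, 4 KiB I/O, 30% reads, 10 s
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'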
00:19:32.163 Initializing NVMe Controllers 00:19:32.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:32.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:32.163 Initialization complete. Launching workers. 00:19:32.163 ======================================================== 00:19:32.163 Latency(us) 00:19:32.163 Device Information : IOPS MiB/s Average min max 00:19:32.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14645.31 57.21 4369.52 755.41 18468.29 00:19:32.163 ======================================================== 00:19:32.163 Total : 14645.31 57.21 4369.52 755.41 18468.29 00:19:32.163 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:32.163 rmmod nvme_tcp 00:19:32.163 rmmod nvme_fabrics 00:19:32.163 rmmod nvme_keyring 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2673747 ']' 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2673747 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 2673747 ']' 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 2673747 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2673747 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2673747' 00:19:32.163 killing process with pid 2673747 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 2673747 00:19:32.163 16:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 2673747 00:19:32.163 nvmf threads initialize successfully 00:19:32.163 bdev subsystem init successfully 00:19:32.163 created a nvmf target service 00:19:32.163 create targets's poll groups done 00:19:32.163 all subsystems of target started 00:19:32.163 nvmf target is running 00:19:32.163 all subsystems of target stopped 00:19:32.163 destroy targets's poll groups done 00:19:32.163 destroyed the nvmf target service 00:19:32.163 bdev subsystem finish successfully 00:19:32.163 nvmf threads destroy successfully 00:19:32.163 16:32:50 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:32.163 16:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:32.163 16:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:32.163 16:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.163 16:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:32.163 16:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.163 16:32:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.163 16:32:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.732 16:32:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:32.732 16:32:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:19:32.732 16:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.732 16:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:32.732 00:19:32.732 real 0m16.479s 00:19:32.732 user 0m45.128s 00:19:32.732 sys 0m3.958s 00:19:32.732 16:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:32.732 16:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:32.732 ************************************ 00:19:32.732 END TEST nvmf_example 00:19:32.732 ************************************ 00:19:32.732 16:32:52 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:19:32.732 16:32:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:32.732 16:32:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:32.732 16:32:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:32.732 ************************************ 00:19:32.732 START TEST nvmf_filesystem 00:19:32.732 ************************************ 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:19:32.732 * Looking for test storage... 
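Every suite in this log, including the nvmf_filesystem run starting here, goes through the run_test helper, which accounts for the starred START/END banners and the real/user/sys summary printed after each test. A minimal sketch of that observable behaviour (the real helper in autotest_common.sh also validates its arguments and manages xtrace state):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # produces the timing block seen after each suite
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
run_test nvmf_filesystem test/nvmf/target/filesystem.sh --transport=tcp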
00:19:32.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:19:32.732 16:32:52 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:19:32.732 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:19:32.733 16:32:52 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:19:32.733 #define SPDK_CONFIG_H 00:19:32.733 #define SPDK_CONFIG_APPS 1 00:19:32.733 #define SPDK_CONFIG_ARCH native 00:19:32.733 #undef SPDK_CONFIG_ASAN 00:19:32.733 #undef SPDK_CONFIG_AVAHI 00:19:32.733 #undef SPDK_CONFIG_CET 00:19:32.733 #define SPDK_CONFIG_COVERAGE 1 00:19:32.733 #define SPDK_CONFIG_CROSS_PREFIX 00:19:32.733 #undef SPDK_CONFIG_CRYPTO 00:19:32.733 #undef SPDK_CONFIG_CRYPTO_MLX5 00:19:32.733 #undef SPDK_CONFIG_CUSTOMOCF 00:19:32.733 #undef SPDK_CONFIG_DAOS 00:19:32.733 #define SPDK_CONFIG_DAOS_DIR 00:19:32.733 #define SPDK_CONFIG_DEBUG 1 00:19:32.733 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:19:32.733 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:19:32.733 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:19:32.733 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:19:32.733 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:19:32.733 #undef SPDK_CONFIG_DPDK_UADK 00:19:32.733 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:19:32.733 #define SPDK_CONFIG_EXAMPLES 1 00:19:32.733 #undef SPDK_CONFIG_FC 00:19:32.733 #define SPDK_CONFIG_FC_PATH 00:19:32.733 #define SPDK_CONFIG_FIO_PLUGIN 1 00:19:32.733 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:19:32.733 #undef SPDK_CONFIG_FUSE 00:19:32.733 #undef SPDK_CONFIG_FUZZER 00:19:32.733 #define SPDK_CONFIG_FUZZER_LIB 00:19:32.733 #undef SPDK_CONFIG_GOLANG 00:19:32.733 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:19:32.733 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:19:32.733 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:19:32.733 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:19:32.733 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:19:32.733 #undef SPDK_CONFIG_HAVE_LIBBSD 00:19:32.733 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:19:32.733 #define SPDK_CONFIG_IDXD 1 00:19:32.733 #define SPDK_CONFIG_IDXD_KERNEL 1 00:19:32.733 #undef SPDK_CONFIG_IPSEC_MB 00:19:32.733 #define SPDK_CONFIG_IPSEC_MB_DIR 00:19:32.733 #define SPDK_CONFIG_ISAL 1 00:19:32.733 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:19:32.733 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:19:32.733 #define SPDK_CONFIG_LIBDIR 00:19:32.733 #undef SPDK_CONFIG_LTO 00:19:32.733 #define SPDK_CONFIG_MAX_LCORES 
00:19:32.733 #define SPDK_CONFIG_NVME_CUSE 1 00:19:32.733 #undef SPDK_CONFIG_OCF 00:19:32.733 #define SPDK_CONFIG_OCF_PATH 00:19:32.733 #define SPDK_CONFIG_OPENSSL_PATH 00:19:32.733 #undef SPDK_CONFIG_PGO_CAPTURE 00:19:32.733 #define SPDK_CONFIG_PGO_DIR 00:19:32.733 #undef SPDK_CONFIG_PGO_USE 00:19:32.733 #define SPDK_CONFIG_PREFIX /usr/local 00:19:32.733 #undef SPDK_CONFIG_RAID5F 00:19:32.733 #undef SPDK_CONFIG_RBD 00:19:32.733 #define SPDK_CONFIG_RDMA 1 00:19:32.733 #define SPDK_CONFIG_RDMA_PROV verbs 00:19:32.733 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:19:32.733 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:19:32.733 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:19:32.733 #define SPDK_CONFIG_SHARED 1 00:19:32.733 #undef SPDK_CONFIG_SMA 00:19:32.733 #define SPDK_CONFIG_TESTS 1 00:19:32.733 #undef SPDK_CONFIG_TSAN 00:19:32.733 #define SPDK_CONFIG_UBLK 1 00:19:32.733 #define SPDK_CONFIG_UBSAN 1 00:19:32.733 #undef SPDK_CONFIG_UNIT_TESTS 00:19:32.733 #undef SPDK_CONFIG_URING 00:19:32.733 #define SPDK_CONFIG_URING_PATH 00:19:32.733 #undef SPDK_CONFIG_URING_ZNS 00:19:32.733 #undef SPDK_CONFIG_USDT 00:19:32.733 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:19:32.733 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:19:32.733 #define SPDK_CONFIG_VFIO_USER 1 00:19:32.733 #define SPDK_CONFIG_VFIO_USER_DIR 00:19:32.733 #define SPDK_CONFIG_VHOST 1 00:19:32.733 #define SPDK_CONFIG_VIRTIO 1 00:19:32.733 #undef SPDK_CONFIG_VTUNE 00:19:32.733 #define SPDK_CONFIG_VTUNE_DIR 00:19:32.733 #define SPDK_CONFIG_WERROR 1 00:19:32.733 #define SPDK_CONFIG_WPDK_DIR 00:19:32.733 #undef SPDK_CONFIG_XNVME 00:19:32.733 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.733 16:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:19:32.734 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:19:32.735 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 2675458 ]] 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 2675458 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.u35MOj 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.u35MOj/tests/target /tmp/spdk.u35MOj 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=947712000 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4336717824 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=49092947968 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994729472 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12901781504 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30992654336 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12389937152 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398948352 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9011200 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:19:32.736 16:32:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996594688 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=770048 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:19:32.736 * Looking for test storage... 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=49092947968 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=15116374016 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:19:32.736 
16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.736 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.737 16:32:52 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:19:32.737 16:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:35.268 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:35.268 Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.268 16:32:54 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:35.268 Found net devices under 0000:82:00.0: cvl_0_0 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.268 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:35.269 Found net devices under 0000:82:00.1: cvl_0_1 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.269 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.527 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.527 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.527 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:35.527 16:32:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:35.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:19:35.527 00:19:35.527 --- 10.0.0.2 ping statistics --- 00:19:35.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.527 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:19:35.527 00:19:35.527 --- 10.0.0.1 ping statistics --- 00:19:35.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.527 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:35.527 16:32:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:35.528 ************************************ 00:19:35.528 START TEST nvmf_filesystem_no_in_capsule 00:19:35.528 ************************************ 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:19:35.528 16:32:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2677481 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2677481 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2677481 ']' 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:35.528 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:35.528 [2024-07-22 16:32:55.139020] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:35.528 [2024-07-22 16:32:55.139119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.528 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.786 [2024-07-22 16:32:55.213191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.786 [2024-07-22 16:32:55.304824] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.786 [2024-07-22 16:32:55.304879] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.786 [2024-07-22 16:32:55.304891] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.786 [2024-07-22 16:32:55.304903] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.786 [2024-07-22 16:32:55.304913] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:35.786 [2024-07-22 16:32:55.305056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.786 [2024-07-22 16:32:55.305085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.786 [2024-07-22 16:32:55.305141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.786 [2024-07-22 16:32:55.305143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.786 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:35.786 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:19:35.786 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:35.786 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:35.786 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:36.045 [2024-07-22 16:32:55.445495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:36.045 Malloc1 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:36.045 [2024-07-22 16:32:55.619428] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:36.045 { 00:19:36.045 "name": "Malloc1", 00:19:36.045 "aliases": [ 00:19:36.045 "4f5ecece-3e85-413b-8020-90b56cf6f04f" 00:19:36.045 ], 00:19:36.045 "product_name": "Malloc disk", 00:19:36.045 "block_size": 512, 00:19:36.045 "num_blocks": 1048576, 00:19:36.045 "uuid": "4f5ecece-3e85-413b-8020-90b56cf6f04f", 00:19:36.045 "assigned_rate_limits": { 00:19:36.045 "rw_ios_per_sec": 0, 00:19:36.045 "rw_mbytes_per_sec": 0, 00:19:36.045 "r_mbytes_per_sec": 0, 00:19:36.045 "w_mbytes_per_sec": 0 00:19:36.045 }, 00:19:36.045 "claimed": true, 00:19:36.045 "claim_type": "exclusive_write", 00:19:36.045 "zoned": false, 00:19:36.045 "supported_io_types": { 00:19:36.045 "read": true, 00:19:36.045 "write": true, 00:19:36.045 "unmap": true, 00:19:36.045 "write_zeroes": true, 00:19:36.045 "flush": true, 00:19:36.045 "reset": true, 00:19:36.045 "compare": false, 00:19:36.045 "compare_and_write": false, 00:19:36.045 "abort": true, 00:19:36.045 "nvme_admin": false, 00:19:36.045 "nvme_io": false 00:19:36.045 }, 00:19:36.045 "memory_domains": [ 00:19:36.045 { 00:19:36.045 "dma_device_id": "system", 00:19:36.045 "dma_device_type": 1 00:19:36.045 }, 00:19:36.045 { 00:19:36.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.045 "dma_device_type": 2 00:19:36.045 } 00:19:36.045 ], 00:19:36.045 "driver_specific": {} 00:19:36.045 } 00:19:36.045 ]' 00:19:36.045 
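Condensed, the target-side provisioning replayed above is five RPCs plus a size readback. A sketch with the exact arguments from the trace (rpc.py path abbreviated):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: no in-capsule data in this suite
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1           # 512 MiB RAM-backed bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Size readback, as get_bdev_size does above: 512 * 1048576 = 536870912 bytes
./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size'   # 512
./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks'   # 1048576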
16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:19:36.045 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:36.303 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:19:36.303 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:19:36.304 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:19:36.304 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:19:36.304 16:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:36.869 16:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:19:36.869 16:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:19:36.869 16:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:36.869 16:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:19:36.869 16:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:38.768 16:32:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:19:38.768 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:19:39.026 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:19:39.592 16:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:19:40.525 16:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:19:40.525 16:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:19:40.525 16:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:40.525 16:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:40.525 16:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:40.525 ************************************ 00:19:40.525 START TEST filesystem_ext4 00:19:40.525 ************************************ 00:19:40.525 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:19:40.525 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:19:40.525 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:40.525 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:19:40.525 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:19:40.526 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:19:40.526 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:19:40.526 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:19:40.526 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:19:40.526 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:19:40.526 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:19:40.526 mke2fs 1.46.5 (30-Dec-2021) 00:19:40.526 Discarding device blocks: 0/522240 done 00:19:40.526 Creating filesystem with 522240 1k blocks and 130560 inodes 00:19:40.526 
Filesystem UUID: c79e7da4-f565-417f-9cbc-e66e316052b1 00:19:40.526 Superblock backups stored on blocks: 00:19:40.526 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:19:40.526 00:19:40.526 Allocating group tables: 0/64 done 00:19:40.526 Writing inode tables: 0/64 done 00:19:41.107 Creating journal (8192 blocks): done 00:19:41.107 Writing superblocks and filesystem accounting information: 0/64 done 00:19:41.107 00:19:41.107 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:19:41.107 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2677481 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:41.366 00:19:41.366 real 0m0.920s 00:19:41.366 user 0m0.019s 00:19:41.366 sys 0m0.048s 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:19:41.366 ************************************ 00:19:41.366 END TEST filesystem_ext4 00:19:41.366 ************************************ 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:41.366 ************************************ 00:19:41.366 START TEST filesystem_btrfs 00:19:41.366 ************************************ 00:19:41.366 16:33:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:19:41.366 16:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:19:41.624 btrfs-progs v6.6.2 00:19:41.624 See https://btrfs.readthedocs.io for more information. 00:19:41.624 00:19:41.624 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:19:41.624 NOTE: several default settings have changed in version 5.15, please make sure 00:19:41.624 this does not affect your deployments: 00:19:41.624 - DUP for metadata (-m dup) 00:19:41.624 - enabled no-holes (-O no-holes) 00:19:41.624 - enabled free-space-tree (-R free-space-tree) 00:19:41.624 00:19:41.624 Label: (null) 00:19:41.624 UUID: 314825c0-5d11-4e89-a78f-9d31e89dae7e 00:19:41.624 Node size: 16384 00:19:41.624 Sector size: 4096 00:19:41.624 Filesystem size: 510.00MiB 00:19:41.625 Block group profiles: 00:19:41.625 Data: single 8.00MiB 00:19:41.625 Metadata: DUP 32.00MiB 00:19:41.625 System: DUP 8.00MiB 00:19:41.625 SSD detected: yes 00:19:41.625 Zoned device: no 00:19:41.625 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:19:41.625 Runtime features: free-space-tree 00:19:41.625 Checksum: crc32c 00:19:41.625 Number of devices: 1 00:19:41.625 Devices: 00:19:41.625 ID SIZE PATH 00:19:41.625 1 510.00MiB /dev/nvme0n1p1 00:19:41.625 00:19:41.625 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:19:41.625 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2677481 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:42.190 00:19:42.190 real 0m0.828s 00:19:42.190 user 0m0.025s 00:19:42.190 sys 0m0.102s 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:19:42.190 ************************************ 00:19:42.190 END TEST filesystem_btrfs 00:19:42.190 ************************************ 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:19:42.190 16:33:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:42.190 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:42.449 ************************************ 00:19:42.449 START TEST filesystem_xfs 00:19:42.449 ************************************ 00:19:42.449 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:19:42.449 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:19:42.449 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:42.449 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:19:42.449 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:19:42.449 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:19:42.449 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:19:42.449 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:19:42.449 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:19:42.449 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:19:42.449 16:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:19:42.449 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:19:42.449 = sectsz=512 attr=2, projid32bit=1 00:19:42.449 = crc=1 finobt=1, sparse=1, rmapbt=0 00:19:42.449 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:19:42.449 data = bsize=4096 blocks=130560, imaxpct=25 00:19:42.449 = sunit=0 swidth=0 blks 00:19:42.449 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:19:42.449 log =internal log bsize=4096 blocks=16384, version=2 00:19:42.449 = sectsz=512 sunit=0 blks, lazy-count=1 00:19:42.449 realtime =none extsz=4096 blocks=0, rtextents=0 00:19:43.382 Discarding blocks...Done. 
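All three mkfs runs above operate on /dev/nvme0n1p1, produced by the host-side attach steps traced earlier. Gathered into one place, with the retry/loop structure lightly reconstructed around the logged commands (sec_size_to_bytes is the setup/common.sh helper seen in the trace):

# Connect, then wait until lsblk shows the expected serial (waitforserial):
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
    --hostid=8b464f06-2980-e311-ba20-001e67a94acd \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
sleep 2
(( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 ))  # the helper retries this up to 16 times
# Resolve the device name from the serial, check it matches the malloc bdev size,
# then lay down a single full-size GPT partition:
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
nvme_size=$(sec_size_to_bytes "$nvme_name")   # echoed as 536870912 in the trace
(( nvme_size == 536870912 ))
mkdir -p /mnt/device
parted -s /dev/"$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe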
00:19:43.382 16:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:19:43.382 16:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2677481 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:45.911 00:19:45.911 real 0m3.272s 00:19:45.911 user 0m0.008s 00:19:45.911 sys 0m0.074s 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:19:45.911 ************************************ 00:19:45.911 END TEST filesystem_xfs 00:19:45.911 ************************************ 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:45.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:19:45.911 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:45.912 
16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2677481 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2677481 ']' 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2677481 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2677481 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2677481' 00:19:45.912 killing process with pid 2677481 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 2677481 00:19:45.912 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 2677481 00:19:46.170 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:19:46.170 00:19:46.170 real 0m10.680s 00:19:46.170 user 0m40.881s 00:19:46.170 sys 0m1.655s 00:19:46.170 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:46.170 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:46.170 ************************************ 00:19:46.170 END TEST nvmf_filesystem_no_in_capsule 00:19:46.170 ************************************ 00:19:46.170 16:33:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:19:46.170 16:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:46.170 16:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:46.170 16:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:46.170 
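Before the in-capsule variant starts below, the teardown that just completed is worth isolating; in order, with the commands as logged:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # remove the test partition under a device lock
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # host drops the controller; lsblk is polled until the serial is gone
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 2677481 && wait 2677481                     # killprocess: stop nvmf_tgt and reap it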
************************************ 00:19:46.170 START TEST nvmf_filesystem_in_capsule 00:19:46.170 ************************************ 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2679034 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2679034 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2679034 ']' 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:46.429 16:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:46.429 [2024-07-22 16:33:05.873976] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:46.429 [2024-07-22 16:33:05.874072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.429 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.429 [2024-07-22 16:33:05.949771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.429 [2024-07-22 16:33:06.035778] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.429 [2024-07-22 16:33:06.035838] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.429 [2024-07-22 16:33:06.035861] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.429 [2024-07-22 16:33:06.035873] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.429 [2024-07-22 16:33:06.035882] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
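This suite repeats the same flow; the difference visible in these traces is the transport's in-capsule data size, which lets small writes ride inside the NVMe/TCP command capsule instead of a separate data transfer. As the transport creation logged below shows, the call becomes:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: allow up to 4 KiB of in-capsule data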
00:19:46.429 [2024-07-22 16:33:06.035972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.429 [2024-07-22 16:33:06.036037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.429 [2024-07-22 16:33:06.036011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.429 [2024-07-22 16:33:06.036040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:46.687 [2024-07-22 16:33:06.182607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.687 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:46.947 Malloc1 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:46.947 16:33:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:46.947 [2024-07-22 16:33:06.369321] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:46.947 { 00:19:46.947 "name": "Malloc1", 00:19:46.947 "aliases": [ 00:19:46.947 "b39c1636-e2b1-44af-90ec-0d55e488b6e7" 00:19:46.947 ], 00:19:46.947 "product_name": "Malloc disk", 00:19:46.947 "block_size": 512, 00:19:46.947 "num_blocks": 1048576, 00:19:46.947 "uuid": "b39c1636-e2b1-44af-90ec-0d55e488b6e7", 00:19:46.947 "assigned_rate_limits": { 00:19:46.947 "rw_ios_per_sec": 0, 00:19:46.947 "rw_mbytes_per_sec": 0, 00:19:46.947 "r_mbytes_per_sec": 0, 00:19:46.947 "w_mbytes_per_sec": 0 00:19:46.947 }, 00:19:46.947 "claimed": true, 00:19:46.947 "claim_type": "exclusive_write", 00:19:46.947 "zoned": false, 00:19:46.947 "supported_io_types": { 00:19:46.947 "read": true, 00:19:46.947 "write": true, 00:19:46.947 "unmap": true, 00:19:46.947 "write_zeroes": true, 00:19:46.947 "flush": true, 00:19:46.947 "reset": true, 00:19:46.947 "compare": false, 00:19:46.947 "compare_and_write": false, 00:19:46.947 "abort": true, 00:19:46.947 "nvme_admin": false, 00:19:46.947 "nvme_io": false 00:19:46.947 }, 00:19:46.947 "memory_domains": [ 00:19:46.947 { 00:19:46.947 "dma_device_id": "system", 00:19:46.947 "dma_device_type": 1 00:19:46.947 }, 00:19:46.947 { 00:19:46.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.947 "dma_device_type": 2 00:19:46.947 } 00:19:46.947 ], 00:19:46.947 "driver_specific": {} 00:19:46.947 } 00:19:46.947 ]' 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:19:46.947 16:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:47.514 16:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:19:47.514 16:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:19:47.514 16:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:47.514 16:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:19:47.514 16:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:19:50.041 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:19:50.606 16:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:51.540 ************************************ 00:19:51.540 START TEST filesystem_in_capsule_ext4 00:19:51.540 ************************************ 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:19:51.540 16:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:19:51.540 mke2fs 1.46.5 (30-Dec-2021) 00:19:51.540 Discarding device blocks: 0/522240 done 00:19:51.540 Creating filesystem with 522240 1k blocks and 130560 inodes 00:19:51.540 Filesystem UUID: 2da8c170-9607-4565-ac48-d99559fca80d 00:19:51.540 Superblock backups stored on blocks: 00:19:51.540 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:19:51.540 00:19:51.540 Allocating group tables: 0/64 done 00:19:51.540 Writing inode tables: 0/64 done 00:19:52.105 Creating journal (8192 blocks): done 00:19:52.105 Writing superblocks and filesystem accounting information: 0/64 done 00:19:52.105 00:19:52.105 16:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:19:52.105 16:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:52.669 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2679034 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:52.928 00:19:52.928 real 0m1.430s 00:19:52.928 user 0m0.015s 00:19:52.928 sys 0m0.061s 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:19:52.928 ************************************ 00:19:52.928 END TEST filesystem_in_capsule_ext4 00:19:52.928 ************************************ 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:52.928 ************************************ 00:19:52.928 START TEST filesystem_in_capsule_btrfs 00:19:52.928 ************************************ 00:19:52.928 16:33:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:19:52.928 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:19:53.494 btrfs-progs v6.6.2 00:19:53.494 See https://btrfs.readthedocs.io for more information. 00:19:53.494 00:19:53.494 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:19:53.494 NOTE: several default settings have changed in version 5.15, please make sure 00:19:53.494 this does not affect your deployments: 00:19:53.494 - DUP for metadata (-m dup) 00:19:53.494 - enabled no-holes (-O no-holes) 00:19:53.494 - enabled free-space-tree (-R free-space-tree) 00:19:53.494 00:19:53.494 Label: (null) 00:19:53.494 UUID: 0ffb92d2-4b2a-4b8b-8d60-ced4b479d378 00:19:53.494 Node size: 16384 00:19:53.494 Sector size: 4096 00:19:53.494 Filesystem size: 510.00MiB 00:19:53.494 Block group profiles: 00:19:53.494 Data: single 8.00MiB 00:19:53.494 Metadata: DUP 32.00MiB 00:19:53.494 System: DUP 8.00MiB 00:19:53.494 SSD detected: yes 00:19:53.494 Zoned device: no 00:19:53.494 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:19:53.494 Runtime features: free-space-tree 00:19:53.494 Checksum: crc32c 00:19:53.494 Number of devices: 1 00:19:53.494 Devices: 00:19:53.494 ID SIZE PATH 00:19:53.494 1 510.00MiB /dev/nvme0n1p1 00:19:53.494 00:19:53.494 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:19:53.494 16:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2679034 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:54.061 00:19:54.061 real 0m1.072s 00:19:54.061 user 0m0.033s 00:19:54.061 sys 0m0.107s 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:54.061 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:19:54.061 ************************************ 00:19:54.061 END TEST filesystem_in_capsule_btrfs 00:19:54.061 ************************************ 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:54.062 ************************************ 00:19:54.062 START TEST filesystem_in_capsule_xfs 00:19:54.062 ************************************ 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:19:54.062 16:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:19:54.062 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:19:54.062 = sectsz=512 attr=2, projid32bit=1 00:19:54.062 = crc=1 finobt=1, sparse=1, rmapbt=0 00:19:54.062 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:19:54.062 data = bsize=4096 blocks=130560, imaxpct=25 00:19:54.062 = sunit=0 swidth=0 blks 00:19:54.062 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:19:54.062 log =internal log bsize=4096 blocks=16384, version=2 00:19:54.062 = sectsz=512 sunit=0 blks, lazy-count=1 00:19:54.062 realtime =none extsz=4096 blocks=0, rtextents=0 00:19:55.435 Discarding blocks...Done. 
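Each of the six filesystem cases in this file boils down to the same smoke test, reconstructed here from the target/filesystem.sh line numbers in the traces (make_filesystem forces ext4 with -F and btrfs/xfs with -f, per the branching traced above):

make_filesystem xfs /dev/nvme0n1p1   # runs mkfs.xfs -f /dev/nvme0n1p1 here
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                # prove the filesystem accepts writes over NVMe/TCP
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 2679034                      # the target must still be alive after the I/O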
00:19:55.435 16:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:19:55.435 16:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:57.962 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:57.962 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:19:57.962 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:57.962 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:19:57.962 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:19:57.962 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2679034 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:57.963 00:19:57.963 real 0m3.540s 00:19:57.963 user 0m0.018s 00:19:57.963 sys 0m0.061s 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:19:57.963 ************************************ 00:19:57.963 END TEST filesystem_in_capsule_xfs 00:19:57.963 ************************************ 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:57.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.963 16:33:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2679034 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2679034 ']' 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2679034 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2679034 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2679034' 00:19:57.963 killing process with pid 2679034 00:19:57.963 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 2679034 00:19:58.220 16:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 2679034 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:19:58.479 00:19:58.479 real 0m12.210s 00:19:58.479 user 0m46.893s 00:19:58.479 sys 0m1.806s 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:58.479 ************************************ 00:19:58.479 END TEST nvmf_filesystem_in_capsule 00:19:58.479 ************************************ 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:58.479 rmmod nvme_tcp 00:19:58.479 rmmod nvme_fabrics 00:19:58.479 rmmod nvme_keyring 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.479 16:33:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.014 16:33:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:01.014 00:20:01.014 real 0m27.949s 00:20:01.014 user 1m28.874s 00:20:01.014 sys 0m5.441s 00:20:01.014 16:33:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:01.014 16:33:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:20:01.014 ************************************ 00:20:01.014 END TEST nvmf_filesystem 00:20:01.014 ************************************ 00:20:01.014 16:33:20 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:20:01.014 16:33:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:01.014 16:33:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:01.014 16:33:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.014 ************************************ 00:20:01.014 START TEST nvmf_target_discovery 00:20:01.014 ************************************ 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:20:01.014 * Looking for test storage... 
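Between suites, nvmftestfini returns the rig to a clean slate so run_test can start nvmf_target_discovery from scratch; the rmmod messages above are modprobe -r unloading the kernel initiator stack. A sketch of the equivalent teardown, using the interface and namespace names this host reports (the netns deletion is an assumed reading of the _remove_spdk_ns helper):

  kill "$nvmfpid" && wait "$nvmfpid"          # killprocess: stop nvmf_tgt (wait works because it is a child of this shell)
  sync                                        # flush before pulling modules
  modprobe -v -r nvme-tcp                     # drops nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  ip netns del cvl_0_0_ns_spdk 2>/dev/null    # _remove_spdk_ns (assumed implementation)
  ip -4 addr flush cvl_0_1                    # drop the 10.0.0.1/24 initiator-side test address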
00:20:01.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.014 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:01.015 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:01.015 16:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.015 16:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.549 16:33:22 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:03.549 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:03.549 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:03.549 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:03.550 Found net devices under 0000:82:00.0: cvl_0_0 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:03.550 Found net devices under 0000:82:00.1: cvl_0_1 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:03.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:20:03.550 00:20:03.550 --- 10.0.0.2 ping statistics --- 00:20:03.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.550 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:20:03.550 00:20:03.550 --- 10.0.0.1 ping statistics --- 00:20:03.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.550 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:03.550 16:33:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:03.550 16:33:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2683439 00:20:03.550 16:33:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:03.550 16:33:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2683439 00:20:03.550 16:33:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 2683439 ']' 00:20:03.550 16:33:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.550 16:33:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:03.550 16:33:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:03.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.550 16:33:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:03.550 16:33:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:03.550 [2024-07-22 16:33:23.046301] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:03.550 [2024-07-22 16:33:23.046384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.550 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.550 [2024-07-22 16:33:23.125449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.808 [2024-07-22 16:33:23.219777] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.808 [2024-07-22 16:33:23.219833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.808 [2024-07-22 16:33:23.219859] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.808 [2024-07-22 16:33:23.219874] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.808 [2024-07-22 16:33:23.219886] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.808 [2024-07-22 16:33:23.219977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.808 [2024-07-22 16:33:23.220015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.808 [2024-07-22 16:33:23.220068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.808 [2024-07-22 16:33:23.220071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.373 16:33:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:04.373 16:33:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:20:04.373 16:33:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.373 16:33:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.373 16:33:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.373 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.373 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.373 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.373 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.373 [2024-07-22 16:33:24.023093] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:20:04.630 16:33:24 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 Null1 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 [2024-07-22 16:33:24.063398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 Null2 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:04.630 16:33:24 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 Null3 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 Null4 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.630 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 4420 00:20:04.888 00:20:04.888 Discovery Log Number of Records 6, Generation counter 6 00:20:04.888 =====Discovery Log Entry 0====== 00:20:04.888 trtype: tcp 00:20:04.888 adrfam: ipv4 00:20:04.888 subtype: current discovery subsystem 00:20:04.888 treq: not required 00:20:04.888 portid: 0 00:20:04.888 trsvcid: 4420 00:20:04.888 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:04.888 traddr: 10.0.0.2 00:20:04.888 eflags: explicit discovery connections, duplicate discovery information 00:20:04.888 sectype: none 00:20:04.888 =====Discovery Log Entry 1====== 00:20:04.888 trtype: tcp 00:20:04.888 adrfam: ipv4 00:20:04.888 subtype: nvme subsystem 00:20:04.888 treq: not required 00:20:04.888 portid: 0 00:20:04.888 trsvcid: 4420 00:20:04.888 subnqn: nqn.2016-06.io.spdk:cnode1 00:20:04.888 traddr: 10.0.0.2 00:20:04.888 eflags: none 00:20:04.888 sectype: none 00:20:04.888 =====Discovery Log Entry 2====== 00:20:04.888 trtype: tcp 00:20:04.888 adrfam: ipv4 00:20:04.888 subtype: nvme subsystem 00:20:04.888 treq: not required 00:20:04.888 portid: 0 00:20:04.888 trsvcid: 4420 00:20:04.888 subnqn: nqn.2016-06.io.spdk:cnode2 00:20:04.888 traddr: 10.0.0.2 00:20:04.888 eflags: none 00:20:04.888 sectype: none 00:20:04.888 =====Discovery Log Entry 3====== 00:20:04.888 trtype: tcp 00:20:04.888 adrfam: ipv4 00:20:04.888 subtype: nvme subsystem 00:20:04.888 treq: not required 00:20:04.888 portid: 0 00:20:04.888 trsvcid: 4420 00:20:04.888 subnqn: nqn.2016-06.io.spdk:cnode3 00:20:04.888 traddr: 10.0.0.2 00:20:04.888 eflags: none 00:20:04.888 sectype: none 00:20:04.888 =====Discovery Log Entry 4====== 00:20:04.888 trtype: tcp 00:20:04.888 adrfam: ipv4 00:20:04.888 subtype: nvme subsystem 00:20:04.888 treq: not required 
00:20:04.888 portid: 0 00:20:04.888 trsvcid: 4420 00:20:04.888 subnqn: nqn.2016-06.io.spdk:cnode4 00:20:04.888 traddr: 10.0.0.2 00:20:04.888 eflags: none 00:20:04.888 sectype: none 00:20:04.888 =====Discovery Log Entry 5====== 00:20:04.888 trtype: tcp 00:20:04.888 adrfam: ipv4 00:20:04.888 subtype: discovery subsystem referral 00:20:04.888 treq: not required 00:20:04.888 portid: 0 00:20:04.888 trsvcid: 4430 00:20:04.888 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:04.888 traddr: 10.0.0.2 00:20:04.888 eflags: none 00:20:04.888 sectype: none 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:20:04.888 Perform nvmf subsystem discovery via RPC 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.888 [ 00:20:04.888 { 00:20:04.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:04.888 "subtype": "Discovery", 00:20:04.888 "listen_addresses": [ 00:20:04.888 { 00:20:04.888 "trtype": "TCP", 00:20:04.888 "adrfam": "IPv4", 00:20:04.888 "traddr": "10.0.0.2", 00:20:04.888 "trsvcid": "4420" 00:20:04.888 } 00:20:04.888 ], 00:20:04.888 "allow_any_host": true, 00:20:04.888 "hosts": [] 00:20:04.888 }, 00:20:04.888 { 00:20:04.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.888 "subtype": "NVMe", 00:20:04.888 "listen_addresses": [ 00:20:04.888 { 00:20:04.888 "trtype": "TCP", 00:20:04.888 "adrfam": "IPv4", 00:20:04.888 "traddr": "10.0.0.2", 00:20:04.888 "trsvcid": "4420" 00:20:04.888 } 00:20:04.888 ], 00:20:04.888 "allow_any_host": true, 00:20:04.888 "hosts": [], 00:20:04.888 "serial_number": "SPDK00000000000001", 00:20:04.888 "model_number": "SPDK bdev Controller", 00:20:04.888 "max_namespaces": 32, 00:20:04.888 "min_cntlid": 1, 00:20:04.888 "max_cntlid": 65519, 00:20:04.888 "namespaces": [ 00:20:04.888 { 00:20:04.888 "nsid": 1, 00:20:04.888 "bdev_name": "Null1", 00:20:04.888 "name": "Null1", 00:20:04.888 "nguid": "F603CA1CA41F4585AD77F2FD6E93EDCD", 00:20:04.888 "uuid": "f603ca1c-a41f-4585-ad77-f2fd6e93edcd" 00:20:04.888 } 00:20:04.888 ] 00:20:04.888 }, 00:20:04.888 { 00:20:04.888 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:04.888 "subtype": "NVMe", 00:20:04.888 "listen_addresses": [ 00:20:04.888 { 00:20:04.888 "trtype": "TCP", 00:20:04.888 "adrfam": "IPv4", 00:20:04.888 "traddr": "10.0.0.2", 00:20:04.888 "trsvcid": "4420" 00:20:04.888 } 00:20:04.888 ], 00:20:04.888 "allow_any_host": true, 00:20:04.888 "hosts": [], 00:20:04.888 "serial_number": "SPDK00000000000002", 00:20:04.888 "model_number": "SPDK bdev Controller", 00:20:04.888 "max_namespaces": 32, 00:20:04.888 "min_cntlid": 1, 00:20:04.888 "max_cntlid": 65519, 00:20:04.888 "namespaces": [ 00:20:04.888 { 00:20:04.888 "nsid": 1, 00:20:04.888 "bdev_name": "Null2", 00:20:04.888 "name": "Null2", 00:20:04.888 "nguid": "E2016989CBDC4DA998A7E580A9D1977A", 00:20:04.888 "uuid": "e2016989-cbdc-4da9-98a7-e580a9d1977a" 00:20:04.888 } 00:20:04.888 ] 00:20:04.888 }, 00:20:04.888 { 00:20:04.888 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:20:04.888 "subtype": "NVMe", 00:20:04.888 "listen_addresses": [ 00:20:04.888 { 00:20:04.888 "trtype": "TCP", 00:20:04.888 "adrfam": "IPv4", 00:20:04.888 "traddr": "10.0.0.2", 00:20:04.888 "trsvcid": "4420" 00:20:04.888 } 00:20:04.888 ], 00:20:04.888 "allow_any_host": true, 
00:20:04.888 "hosts": [], 00:20:04.888 "serial_number": "SPDK00000000000003", 00:20:04.888 "model_number": "SPDK bdev Controller", 00:20:04.888 "max_namespaces": 32, 00:20:04.888 "min_cntlid": 1, 00:20:04.888 "max_cntlid": 65519, 00:20:04.888 "namespaces": [ 00:20:04.888 { 00:20:04.888 "nsid": 1, 00:20:04.888 "bdev_name": "Null3", 00:20:04.888 "name": "Null3", 00:20:04.888 "nguid": "718989EC850247248343C319F8D2E20A", 00:20:04.888 "uuid": "718989ec-8502-4724-8343-c319f8d2e20a" 00:20:04.888 } 00:20:04.888 ] 00:20:04.888 }, 00:20:04.888 { 00:20:04.888 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:20:04.888 "subtype": "NVMe", 00:20:04.888 "listen_addresses": [ 00:20:04.888 { 00:20:04.888 "trtype": "TCP", 00:20:04.888 "adrfam": "IPv4", 00:20:04.888 "traddr": "10.0.0.2", 00:20:04.888 "trsvcid": "4420" 00:20:04.888 } 00:20:04.888 ], 00:20:04.888 "allow_any_host": true, 00:20:04.888 "hosts": [], 00:20:04.888 "serial_number": "SPDK00000000000004", 00:20:04.888 "model_number": "SPDK bdev Controller", 00:20:04.888 "max_namespaces": 32, 00:20:04.888 "min_cntlid": 1, 00:20:04.888 "max_cntlid": 65519, 00:20:04.888 "namespaces": [ 00:20:04.888 { 00:20:04.888 "nsid": 1, 00:20:04.888 "bdev_name": "Null4", 00:20:04.888 "name": "Null4", 00:20:04.888 "nguid": "D92159C4868B4F3F8849A85E46397C37", 00:20:04.888 "uuid": "d92159c4-868b-4f3f-8849-a85e46397c37" 00:20:04.888 } 00:20:04.888 ] 00:20:04.888 } 00:20:04.888 ] 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
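The JSON dump above is the raw nvmf_get_subsystems view the test asserts against: the discovery subsystem plus cnode1 through cnode4, each carrying one null-bdev namespace; the rpc_cmd calls around it then delete each subsystem/bdev pair in creation order (the loop continues below through cnode3 and cnode4). The same listing can be pulled straight off the RPC socket; a sketch, with the rpc.py invocation and jq filter illustrative rather than what the harness itself runs:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock nvmf_get_subsystems | jq -r '.[].nqn'
  # -> nqn.2014-08.org.nvmexpress.discovery, then nqn.2016-06.io.spdk:cnode1..cnode4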
00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.888 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:04.889 rmmod nvme_tcp 00:20:04.889 rmmod nvme_fabrics 00:20:04.889 rmmod nvme_keyring 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2683439 ']' 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2683439 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 2683439 ']' 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 2683439 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2683439 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2683439' 00:20:04.889 killing process with pid 2683439 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 2683439 00:20:04.889 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 2683439 00:20:05.147 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:05.147 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:05.147 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:05.147 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.147 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.147 16:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.147 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.147 16:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.678 16:33:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:07.678 00:20:07.678 real 0m6.579s 00:20:07.678 user 0m7.212s 00:20:07.678 sys 0m2.253s 00:20:07.678 16:33:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:07.678 16:33:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:07.678 ************************************ 00:20:07.678 END TEST nvmf_target_discovery 00:20:07.678 ************************************ 00:20:07.678 16:33:26 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:20:07.678 16:33:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:07.678 16:33:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:07.678 16:33:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:07.678 ************************************ 00:20:07.678 START TEST nvmf_referrals 00:20:07.678 ************************************ 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:20:07.678 * Looking for test storage... 00:20:07.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.678 16:33:26 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.679 16:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.209 16:33:29 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:10.209 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:10.209 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.209 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.210 16:33:29 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:10.210 Found net devices under 0000:82:00.0: cvl_0_0 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:10.210 Found net devices under 0000:82:00.1: cvl_0_1 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.210 16:33:29 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:10.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:20:10.210 00:20:10.210 --- 10.0.0.2 ping statistics --- 00:20:10.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.210 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:10.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:10.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:20:10.210 00:20:10.210 --- 10.0.0.1 ping statistics --- 00:20:10.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.210 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2685953 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2685953 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 2685953 ']' 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:10.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:10.210 16:33:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:10.210 [2024-07-22 16:33:29.541944] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:10.210 [2024-07-22 16:33:29.542054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.210 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.210 [2024-07-22 16:33:29.620785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.210 [2024-07-22 16:33:29.714491] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.210 [2024-07-22 16:33:29.714550] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.210 [2024-07-22 16:33:29.714577] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.210 [2024-07-22 16:33:29.714591] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.210 [2024-07-22 16:33:29.714604] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.210 [2024-07-22 16:33:29.714685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.210 [2024-07-22 16:33:29.714737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.210 [2024-07-22 16:33:29.714788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.211 [2024-07-22 16:33:29.714791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.144 [2024-07-22 16:33:30.555231] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.144 [2024-07-22 16:33:30.567432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 
-s 8009 -o json 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:11.144 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:11.402 16:33:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:20:11.660 16:33:31 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:11.660 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:11.918 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:12.175 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:12.433 rmmod nvme_tcp 00:20:12.433 rmmod nvme_fabrics 00:20:12.433 rmmod nvme_keyring 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2685953 ']' 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2685953 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 2685953 ']' 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 2685953 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2685953 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2685953' 00:20:12.433 killing process with pid 2685953 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 2685953 00:20:12.433 16:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 2685953 00:20:12.691 16:33:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:12.691 16:33:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:12.691 16:33:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:12.691 16:33:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.691 16:33:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:12.691 16:33:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.691 16:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.691 16:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.223 16:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.224 00:20:15.224 real 0m7.445s 00:20:15.224 user 0m11.769s 00:20:15.224 sys 0m2.410s 00:20:15.224 16:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:20:15.224 16:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:15.224 ************************************ 00:20:15.224 END TEST nvmf_referrals 00:20:15.224 ************************************ 00:20:15.224 16:33:34 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:20:15.224 16:33:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:15.224 16:33:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:15.224 16:33:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.224 ************************************ 00:20:15.224 START TEST nvmf_connect_disconnect 00:20:15.224 ************************************ 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:20:15.224 * Looking for test storage... 00:20:15.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.224 16:33:34 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.224 16:33:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:17.757 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:17.757 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:17.757 Found net devices under 0000:82:00.0: cvl_0_0 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:17.757 Found net devices under 0000:82:00.1: cvl_0_1 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.757 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:20:17.758 00:20:17.758 --- 10.0.0.2 ping statistics --- 00:20:17.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.758 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:20:17.758 00:20:17.758 --- 10.0.0.1 ping statistics --- 00:20:17.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.758 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2688659 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2688659 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 2688659 ']' 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:17.758 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:17.758 [2024-07-22 16:33:37.235819] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
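[editorial note] The nvmf_tcp_init sequence above splits the two E810 ports across network namespaces so one host can act as both target and initiator: cvl_0_0 is moved into cvl_0_0_ns_spdk as 10.0.0.2 (target side) while cvl_0_1 stays in the default namespace as 10.0.0.1 (initiator side), and the bidirectional pings confirm the path. A minimal standalone sketch of the same topology — device names and addresses mirror this log and would differ on other hardware:

    # assumes two physical ports, cvl_0_0 and cvl_0_1, both starting in the default namespace
    ip netns add cvl_0_0_ns_spdk                        # namespace that will own the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP replies
    ping -c 1 10.0.0.2                                  # initiator -> target reachability check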
00:20:17.758 [2024-07-22 16:33:37.235906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.758 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.758 [2024-07-22 16:33:37.316504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.016 [2024-07-22 16:33:37.413708] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.016 [2024-07-22 16:33:37.413761] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.016 [2024-07-22 16:33:37.413778] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.016 [2024-07-22 16:33:37.413792] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.016 [2024-07-22 16:33:37.413804] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.016 [2024-07-22 16:33:37.413861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.016 [2024-07-22 16:33:37.413915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.016 [2024-07-22 16:33:37.413984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.016 [2024-07-22 16:33:37.413989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.016 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:18.016 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:20:18.016 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.016 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.016 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:18.016 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.016 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:20:18.016 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.016 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:18.016 [2024-07-22 16:33:37.572931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:18.017 16:33:37 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:18.017 [2024-07-22 16:33:37.630153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:20:18.017 16:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:20:20.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:23.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:25.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:27.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:30.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:32.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:34.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:37.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:38.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:41.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:44.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:45.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:48.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:50.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:52.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:55.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:57.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:59.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:02.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:04.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:06.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:08.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:11.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:13.776 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:15.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:18.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:20.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:22.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:25.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:27.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:29.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:32.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:34.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:36.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:39.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:41.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:43.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:46.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:48.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:50.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:52.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:54.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:57.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:59.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:01.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:04.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:06.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:08.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:10.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:13.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:15.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:17.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:20.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:22.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:24.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:27.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:29.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:31.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:34.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:36.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:38.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:41.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:43.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:45.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:48.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:50.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:52.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:55.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:56.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:59.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:01.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:03.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:23:06.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:08.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:10.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:13.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:15.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:17.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:20.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:22.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:24.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:26.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:29.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:31.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:33.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:36.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:38.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:41.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:42.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:45.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:47.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:49.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:52.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:54.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:56.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:59.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:01.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:03.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:06.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:08.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:08.287 rmmod nvme_tcp 00:24:08.287 rmmod nvme_fabrics 00:24:08.287 rmmod nvme_keyring 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2688659 ']' 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2688659 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 
2688659 ']' 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 2688659 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2688659 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2688659' 00:24:08.287 killing process with pid 2688659 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 2688659 00:24:08.287 16:37:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 2688659 00:24:08.546 16:37:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.546 16:37:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:08.546 16:37:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:08.546 16:37:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.546 16:37:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.546 16:37:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.546 16:37:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.546 16:37:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.076 16:37:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:11.076 00:24:11.076 real 3m55.822s 00:24:11.076 user 14m56.120s 00:24:11.076 sys 0m34.884s 00:24:11.076 16:37:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:11.076 16:37:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:11.076 ************************************ 00:24:11.076 END TEST nvmf_connect_disconnect 00:24:11.076 ************************************ 00:24:11.076 16:37:30 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:24:11.076 16:37:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:11.076 16:37:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:11.076 16:37:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:11.076 ************************************ 00:24:11.076 START TEST nvmf_multitarget 00:24:11.076 ************************************ 00:24:11.076 16:37:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:24:11.076 * Looking for test storage... 
00:24:11.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:11.076 16:37:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.076 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:24:11.076 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.076 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.076 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.076 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.076 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.076 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
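[editorial note] Each test re-sources nvmf/common.sh, which generates a fresh host identity with nvme gen-hostnqn and derives NVME_HOSTID from the UUID portion of the NQN — here nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd. A sketch of that derivation; the parameter expansion and the trailing connect line are illustrative, not the script's exact form:

    # requires nvme-cli; gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # strip everything through the last ':' to keep the UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # the identity is then passed on every connect, e.g.:
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1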
00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:24:11.077 16:37:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:24:13.603 Found 0000:82:00.0 (0x8086 - 0x159b) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:24:13.603 Found 0000:82:00.1 (0x8086 - 0x159b) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:13.603 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:24:13.604 Found net devices under 0000:82:00.0: cvl_0_0 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:24:13.604 Found net devices under 0000:82:00.1: cvl_0_1 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:13.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:13.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:24:13.604 00:24:13.604 --- 10.0.0.2 ping statistics --- 00:24:13.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.604 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:13.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:13.604 00:24:13.604 --- 10.0.0.1 ping statistics --- 00:24:13.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.604 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2719924 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2719924 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 2719924 ']' 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:13.604 16:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:24:13.604 [2024-07-22 16:37:32.997670] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
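[editorial note] As in the earlier connect_disconnect run, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the app's RPC socket answers. A rough equivalent of that start-and-wait pattern — the relative paths and the use of rpc_get_methods as a readiness probe are assumptions; the harness's own waitforlisten retries differently:

    # launch the target in the namespace with core mask 0xF, then wait for /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
        sleep 0.5
    done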
00:24:13.604 [2024-07-22 16:37:32.997762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.604 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.604 [2024-07-22 16:37:33.081108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:13.604 [2024-07-22 16:37:33.170282] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.604 [2024-07-22 16:37:33.170335] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.604 [2024-07-22 16:37:33.170349] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.604 [2024-07-22 16:37:33.170359] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.604 [2024-07-22 16:37:33.170369] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.604 [2024-07-22 16:37:33.170451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.604 [2024-07-22 16:37:33.170514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.604 [2024-07-22 16:37:33.170583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.604 [2024-07-22 16:37:33.170585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.861 16:37:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:13.861 16:37:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:24:13.861 16:37:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:13.861 16:37:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:13.861 16:37:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:24:13.861 16:37:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.861 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:13.861 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:24:13.861 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:24:13.861 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:24:13.861 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:24:13.861 "nvmf_tgt_1" 00:24:14.119 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:24:14.119 "nvmf_tgt_2" 00:24:14.119 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:24:14.119 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:24:14.119 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:24:14.119 
16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:24:14.376 true 00:24:14.376 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:24:14.376 true 00:24:14.376 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:24:14.376 16:37:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.633 rmmod nvme_tcp 00:24:14.633 rmmod nvme_fabrics 00:24:14.633 rmmod nvme_keyring 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2719924 ']' 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2719924 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 2719924 ']' 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 2719924 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2719924 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2719924' 00:24:14.633 killing process with pid 2719924 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 2719924 00:24:14.633 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 2719924 00:24:14.891 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:14.891 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:14.891 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:14.891 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.891 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.891 16:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.891 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.891 16:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.791 16:37:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:16.791 00:24:16.791 real 0m6.231s 00:24:16.791 user 0m6.440s 00:24:16.791 sys 0m2.290s 00:24:16.791 16:37:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:16.791 16:37:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:24:16.791 ************************************ 00:24:16.791 END TEST nvmf_multitarget 00:24:16.791 ************************************ 00:24:17.049 16:37:36 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:24:17.049 16:37:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:17.049 16:37:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:17.049 16:37:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:17.049 ************************************ 00:24:17.049 START TEST nvmf_rpc 00:24:17.049 ************************************ 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:24:17.049 * Looking for test storage... 00:24:17.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.049 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.050 16:37:36 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.050 
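The host identity threaded through every nvme connect command later in this run is built once here in nvmf/common.sh. A minimal sketch of that derivation, reconstructed from the common.sh@17-@19 lines above (the uuid-extraction step is an assumption; the trace only shows the resulting values):

# NVME_HOSTNQN comes straight from nvme-cli; NVME_HOSTID is the bare UUID suffix.
NVME_HOSTNQN=$(nvme gen-hostnqn)                 # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}              # assumed extraction of the uuid portion
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")   # reused by the connect calls below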
16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:17.050 16:37:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:24:19.581 Found 0000:82:00.0 (0x8086 - 0x159b) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:24:19.581 Found 0000:82:00.1 (0x8086 - 0x159b) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:24:19.581 Found net devices under 0000:82:00.0: cvl_0_0 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.581 
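The device-discovery loop above resolves each E810 PCI function to its kernel netdev through sysfs. A minimal standalone sketch of that lookup (PCI addresses taken from the trace; the loop body is a simplification of nvmf/common.sh@382-@401):

# For each NIC port, list the net interfaces the kernel registered under its PCI node.
for pci in 0000:82:00.0 0000:82:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $dev ]] || continue                        # skip if the glob matched nothing
        echo "Found net devices under $pci: ${dev##*/}"  # e.g. cvl_0_0, cvl_0_1
    done
done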
16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:24:19.581 Found net devices under 0000:82:00.1: cvl_0_1 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.581 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:19.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:24:19.840 00:24:19.840 --- 10.0.0.2 ping statistics --- 00:24:19.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.840 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:24:19.840 00:24:19.840 --- 10.0.0.1 ping statistics --- 00:24:19.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.840 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2722320 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2722320 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 2722320 ']' 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:19.840 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:19.840 [2024-07-22 16:37:39.340844] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:19.840 [2024-07-22 16:37:39.340918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.840 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.840 [2024-07-22 16:37:39.424358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.099 [2024-07-22 16:37:39.522354] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.099 [2024-07-22 16:37:39.522418] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.099 [2024-07-22 16:37:39.522435] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.099 [2024-07-22 16:37:39.522449] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.099 [2024-07-22 16:37:39.522461] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.099 [2024-07-22 16:37:39.522530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.099 [2024-07-22 16:37:39.522559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.099 [2024-07-22 16:37:39.522608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.099 [2024-07-22 16:37:39.522611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:24:20.099 "tick_rate": 2700000000, 00:24:20.099 "poll_groups": [ 00:24:20.099 { 00:24:20.099 "name": "nvmf_tgt_poll_group_000", 00:24:20.099 "admin_qpairs": 0, 00:24:20.099 "io_qpairs": 0, 00:24:20.099 "current_admin_qpairs": 0, 00:24:20.099 "current_io_qpairs": 0, 00:24:20.099 "pending_bdev_io": 0, 00:24:20.099 "completed_nvme_io": 0, 00:24:20.099 "transports": [] 00:24:20.099 }, 00:24:20.099 { 00:24:20.099 "name": "nvmf_tgt_poll_group_001", 00:24:20.099 "admin_qpairs": 0, 00:24:20.099 "io_qpairs": 0, 00:24:20.099 "current_admin_qpairs": 0, 00:24:20.099 "current_io_qpairs": 0, 00:24:20.099 "pending_bdev_io": 0, 00:24:20.099 "completed_nvme_io": 0, 00:24:20.099 "transports": [] 00:24:20.099 }, 00:24:20.099 { 00:24:20.099 "name": "nvmf_tgt_poll_group_002", 00:24:20.099 "admin_qpairs": 0, 00:24:20.099 "io_qpairs": 0, 00:24:20.099 "current_admin_qpairs": 0, 00:24:20.099 "current_io_qpairs": 0, 00:24:20.099 "pending_bdev_io": 0, 00:24:20.099 "completed_nvme_io": 0, 00:24:20.099 "transports": [] 
00:24:20.099 }, 00:24:20.099 { 00:24:20.099 "name": "nvmf_tgt_poll_group_003", 00:24:20.099 "admin_qpairs": 0, 00:24:20.099 "io_qpairs": 0, 00:24:20.099 "current_admin_qpairs": 0, 00:24:20.099 "current_io_qpairs": 0, 00:24:20.099 "pending_bdev_io": 0, 00:24:20.099 "completed_nvme_io": 0, 00:24:20.099 "transports": [] 00:24:20.099 } 00:24:20.099 ] 00:24:20.099 }' 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:24:20.099 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.358 [2024-07-22 16:37:39.786316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:24:20.358 "tick_rate": 2700000000, 00:24:20.358 "poll_groups": [ 00:24:20.358 { 00:24:20.358 "name": "nvmf_tgt_poll_group_000", 00:24:20.358 "admin_qpairs": 0, 00:24:20.358 "io_qpairs": 0, 00:24:20.358 "current_admin_qpairs": 0, 00:24:20.358 "current_io_qpairs": 0, 00:24:20.358 "pending_bdev_io": 0, 00:24:20.358 "completed_nvme_io": 0, 00:24:20.358 "transports": [ 00:24:20.358 { 00:24:20.358 "trtype": "TCP" 00:24:20.358 } 00:24:20.358 ] 00:24:20.358 }, 00:24:20.358 { 00:24:20.358 "name": "nvmf_tgt_poll_group_001", 00:24:20.358 "admin_qpairs": 0, 00:24:20.358 "io_qpairs": 0, 00:24:20.358 "current_admin_qpairs": 0, 00:24:20.358 "current_io_qpairs": 0, 00:24:20.358 "pending_bdev_io": 0, 00:24:20.358 "completed_nvme_io": 0, 00:24:20.358 "transports": [ 00:24:20.358 { 00:24:20.358 "trtype": "TCP" 00:24:20.358 } 00:24:20.358 ] 00:24:20.358 }, 00:24:20.358 { 00:24:20.358 "name": "nvmf_tgt_poll_group_002", 00:24:20.358 "admin_qpairs": 0, 00:24:20.358 "io_qpairs": 0, 00:24:20.358 "current_admin_qpairs": 0, 00:24:20.358 "current_io_qpairs": 0, 00:24:20.358 "pending_bdev_io": 0, 00:24:20.358 "completed_nvme_io": 0, 00:24:20.358 "transports": [ 00:24:20.358 { 00:24:20.358 "trtype": "TCP" 00:24:20.358 } 00:24:20.358 ] 00:24:20.358 }, 00:24:20.358 { 00:24:20.358 "name": "nvmf_tgt_poll_group_003", 00:24:20.358 "admin_qpairs": 0, 00:24:20.358 "io_qpairs": 0, 00:24:20.358 "current_admin_qpairs": 0, 00:24:20.358 "current_io_qpairs": 0, 00:24:20.358 "pending_bdev_io": 0, 00:24:20.358 "completed_nvme_io": 0, 00:24:20.358 "transports": [ 00:24:20.358 { 00:24:20.358 "trtype": "TCP" 00:24:20.358 } 00:24:20.358 ] 00:24:20.358 } 00:24:20.358 ] 
00:24:20.358 }' 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.358 Malloc1 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.358 [2024-07-22 16:37:39.948114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.358 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 
--hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:24:20.359 [2024-07-22 16:37:39.970702] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:24:20.359 Failed to write to /dev/nvme-fabrics: Input/output error 00:24:20.359 could not add new controller: failed to write to nvme-fabrics device 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.359 16:37:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.359 16:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.359 16:37:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:21.312 16:37:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:24:21.312 16:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:24:21.312 16:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.312 16:37:40 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:21.312 16:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:23.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:23.210 [2024-07-22 16:37:42.719766] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:24:23.210 Failed to write to /dev/nvme-fabrics: Input/output error 00:24:23.210 could not add new controller: failed to write to nvme-fabrics device 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.210 16:37:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:23.776 16:37:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:24:23.776 16:37:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:24:23.776 16:37:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:23.776 16:37:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:23.776 16:37:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:26.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:26.303 [2024-07-22 16:37:45.516389] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.303 16:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:26.868 16:37:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:24:26.868 16:37:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:24:26.868 16:37:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:26.868 16:37:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:26.868 16:37:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:28.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.768 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.769 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:28.769 [2024-07-22 16:37:48.339700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.769 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.769 16:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:24:28.769 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.769 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:28.769 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.769 16:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:28.769 
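Each of the five rpc.sh iterations follows the same create/connect/teardown pattern visible in the trace. A hedged sketch of one iteration as standalone commands (rpc_cmd in the trace wraps SPDK's scripts/rpc.py against the target's RPC socket; the rpc.py path and the omitted --hostnqn/--hostid flags are simplifications of this setup):

# Build the subsystem, expose a TCP listener, and attach the Malloc1 bdev as namespace 5.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
# Exercise the path from the initiator side, then tear everything down again.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1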
16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.769 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:28.769 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.769 16:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:29.333 16:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:24:29.333 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:24:29.333 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.333 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:29.333 16:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:24:31.859 16:37:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:31.859 16:37:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:31.859 16:37:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:24:31.859 16:37:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:31.859 16:37:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:31.859 16:37:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:24:31.859 16:37:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:31.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:31.859 16:37:51 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:31.859 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:31.860 [2024-07-22 16:37:51.122296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.860 16:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:32.425 16:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:24:32.425 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:24:32.425 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:32.425 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:32.425 16:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:34.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.322 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:34.580 [2024-07-22 16:37:53.981446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.580 16:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:34.580 16:37:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.580 16:37:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:35.145 16:37:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:24:35.145 16:37:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:24:35.145 16:37:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
00:24:35.145 16:37:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:35.145 16:37:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:24:37.043 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:37.043 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:37.043 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:37.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.301 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.302 [2024-07-22 16:37:56.803867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.302 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.302 16:37:56 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:24:37.302 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.302 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.302 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.302 16:37:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:37.302 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.302 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:37.302 16:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.302 16:37:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:37.866 16:37:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:24:37.866 16:37:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:24:37.866 16:37:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:37.866 16:37:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:37.866 16:37:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:40.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
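The iteration traced above (target/rpc.sh@81-91 together with the waitforserial helper from common/autotest_common.sh) has a fixed shape: create the subsystem with a known serial, add a TCP listener, attach a Malloc1 namespace, open it to any host, connect the kernel initiator, poll lsblk until the serial shows up, then disconnect and tear everything down. Below is a condensed sketch of that shape, reconstructed from the xtrace rather than copied from the harness source; rpc.py stands in for the full scripts/rpc.py path, and the NQN, serial, address, and namespace ID are the values visible in the log (the real nvme connect call also passes the --hostnqn/--hostid pair shown above).

    # Poll until a block device with the given SERIAL appears; loop bound and
    # sleep interval reconstructed from the autotest_common.sh@1194-1204 trace.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # namespace ID 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The second loop that follows (target/rpc.sh@99-107) exercises the same create/delete path five more times without connecting an initiator, which is why no waitforserial entries appear between its iterations.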
00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.392 [2024-07-22 16:37:59.621946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:40.392 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 [2024-07-22 16:37:59.670062] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 [2024-07-22 16:37:59.718228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 [2024-07-22 16:37:59.766384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 [2024-07-22 16:37:59.814560] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.393 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:24:40.393 "tick_rate": 2700000000, 00:24:40.393 "poll_groups": [ 00:24:40.393 { 00:24:40.393 "name": "nvmf_tgt_poll_group_000", 00:24:40.393 "admin_qpairs": 2, 00:24:40.393 
"io_qpairs": 84, 00:24:40.393 "current_admin_qpairs": 0, 00:24:40.393 "current_io_qpairs": 0, 00:24:40.393 "pending_bdev_io": 0, 00:24:40.393 "completed_nvme_io": 186, 00:24:40.393 "transports": [ 00:24:40.393 { 00:24:40.393 "trtype": "TCP" 00:24:40.393 } 00:24:40.394 ] 00:24:40.394 }, 00:24:40.394 { 00:24:40.394 "name": "nvmf_tgt_poll_group_001", 00:24:40.394 "admin_qpairs": 2, 00:24:40.394 "io_qpairs": 84, 00:24:40.394 "current_admin_qpairs": 0, 00:24:40.394 "current_io_qpairs": 0, 00:24:40.394 "pending_bdev_io": 0, 00:24:40.394 "completed_nvme_io": 134, 00:24:40.394 "transports": [ 00:24:40.394 { 00:24:40.394 "trtype": "TCP" 00:24:40.394 } 00:24:40.394 ] 00:24:40.394 }, 00:24:40.394 { 00:24:40.394 "name": "nvmf_tgt_poll_group_002", 00:24:40.394 "admin_qpairs": 1, 00:24:40.394 "io_qpairs": 84, 00:24:40.394 "current_admin_qpairs": 0, 00:24:40.394 "current_io_qpairs": 0, 00:24:40.394 "pending_bdev_io": 0, 00:24:40.394 "completed_nvme_io": 243, 00:24:40.394 "transports": [ 00:24:40.394 { 00:24:40.394 "trtype": "TCP" 00:24:40.394 } 00:24:40.394 ] 00:24:40.394 }, 00:24:40.394 { 00:24:40.394 "name": "nvmf_tgt_poll_group_003", 00:24:40.394 "admin_qpairs": 2, 00:24:40.394 "io_qpairs": 84, 00:24:40.394 "current_admin_qpairs": 0, 00:24:40.394 "current_io_qpairs": 0, 00:24:40.394 "pending_bdev_io": 0, 00:24:40.394 "completed_nvme_io": 123, 00:24:40.394 "transports": [ 00:24:40.394 { 00:24:40.394 "trtype": "TCP" 00:24:40.394 } 00:24:40.394 ] 00:24:40.394 } 00:24:40.394 ] 00:24:40.394 }' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:40.394 rmmod nvme_tcp 00:24:40.394 rmmod nvme_fabrics 00:24:40.394 rmmod nvme_keyring 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:24:40.394 16:37:59 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2722320 ']' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2722320 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 2722320 ']' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 2722320 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:40.394 16:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2722320 00:24:40.394 16:38:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:40.394 16:38:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:40.394 16:38:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2722320' 00:24:40.394 killing process with pid 2722320 00:24:40.394 16:38:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 2722320 00:24:40.394 16:38:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 2722320 00:24:40.653 16:38:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:40.653 16:38:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:40.653 16:38:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:40.653 16:38:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:40.653 16:38:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:40.653 16:38:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.653 16:38:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.653 16:38:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.185 16:38:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:43.185 00:24:43.185 real 0m25.848s 00:24:43.185 user 1m22.456s 00:24:43.185 sys 0m4.493s 00:24:43.185 16:38:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:43.185 16:38:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:43.185 ************************************ 00:24:43.185 END TEST nvmf_rpc 00:24:43.185 ************************************ 00:24:43.185 16:38:02 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:24:43.185 16:38:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:43.185 16:38:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:43.185 16:38:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:43.185 ************************************ 00:24:43.185 START TEST nvmf_invalid 00:24:43.185 ************************************ 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:24:43.185 * Looking for test storage... 
00:24:43.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.185 16:38:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:24:43.186 16:38:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:24:45.726 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.726 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:24:45.726 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:45.726 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:45.726 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:45.726 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:45.726 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:45.726 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:24:45.727 Found 0000:82:00.0 (0x8086 - 0x159b) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:24:45.727 Found 0000:82:00.1 (0x8086 - 0x159b) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:24:45.727 Found net devices under 0000:82:00.0: cvl_0_0 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:24:45.727 Found net devices under 0000:82:00.1: cvl_0_1 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.727 16:38:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:45.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:24:45.727 00:24:45.727 --- 10.0.0.2 ping statistics --- 00:24:45.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.727 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:24:45.727 00:24:45.727 --- 10.0.0.1 ping statistics --- 00:24:45.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.727 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2727210 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2727210 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 2727210 ']' 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:45.727 16:38:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:24:45.727 [2024-07-22 16:38:05.166889] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
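Target and initiator in this run are the two ports of one Intel E810 NIC: nvmf/common.sh classifies each supported PCI function by vendor/device ID, resolves it to a net device through sysfs (cvl_0_0 and cvl_0_1 here), then isolates the target port in its own network namespace so both ends of the TCP connection get separate stacks on 10.0.0.0/24. The commands below are the ones visible in the trace, condensed into one sequence; device names and addresses are as logged.

    # Resolve the net device behind a PCI function, as in nvmf/common.sh@383.
    pci=0000:82:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    target_dev=${pci_net_devs[0]##*/}                    # cvl_0_0 in this run

    # Move the target port into a namespace and address both sides.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt process is then started inside the namespace (the ip netns exec ... nvmf_tgt invocation above), so its port 4420 listener is only reachable over the cvl_0_1 to cvl_0_0 path being tested.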
00:24:45.727 [2024-07-22 16:38:05.166997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.727 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.727 [2024-07-22 16:38:05.248202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.728 [2024-07-22 16:38:05.343915] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.728 [2024-07-22 16:38:05.343986] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.728 [2024-07-22 16:38:05.344004] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.728 [2024-07-22 16:38:05.344026] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.728 [2024-07-22 16:38:05.344038] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.728 [2024-07-22 16:38:05.344108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.728 [2024-07-22 16:38:05.347988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.728 [2024-07-22 16:38:05.348026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.728 [2024-07-22 16:38:05.348031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.659 16:38:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:46.659 16:38:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:24:46.659 16:38:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.659 16:38:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:46.659 16:38:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:24:46.659 16:38:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.659 16:38:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:46.659 16:38:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5851 00:24:46.916 [2024-07-22 16:38:06.447906] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:24:46.916 16:38:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:24:46.916 { 00:24:46.916 "nqn": "nqn.2016-06.io.spdk:cnode5851", 00:24:46.916 "tgt_name": "foobar", 00:24:46.916 "method": "nvmf_create_subsystem", 00:24:46.916 "req_id": 1 00:24:46.916 } 00:24:46.916 Got JSON-RPC error response 00:24:46.916 response: 00:24:46.916 { 00:24:46.916 "code": -32603, 00:24:46.916 "message": "Unable to find target foobar" 00:24:46.916 }' 00:24:46.916 16:38:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:24:46.916 { 00:24:46.916 "nqn": "nqn.2016-06.io.spdk:cnode5851", 00:24:46.916 "tgt_name": "foobar", 00:24:46.916 "method": "nvmf_create_subsystem", 00:24:46.916 "req_id": 1 00:24:46.916 } 00:24:46.916 Got JSON-RPC error response 00:24:46.916 response: 00:24:46.916 { 00:24:46.916 "code": -32603, 00:24:46.916 "message": "Unable to find target foobar" 00:24:46.916 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:24:46.916 16:38:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:24:46.916 16:38:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9390 00:24:47.173 [2024-07-22 16:38:06.748927] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9390: invalid serial number 'SPDKISFASTANDAWESOME' 00:24:47.173 16:38:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:24:47.173 { 00:24:47.173 "nqn": "nqn.2016-06.io.spdk:cnode9390", 00:24:47.173 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:24:47.173 "method": "nvmf_create_subsystem", 00:24:47.173 "req_id": 1 00:24:47.173 } 00:24:47.173 Got JSON-RPC error response 00:24:47.173 response: 00:24:47.173 { 00:24:47.173 "code": -32602, 00:24:47.173 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:24:47.173 }' 00:24:47.173 16:38:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:24:47.173 { 00:24:47.173 "nqn": "nqn.2016-06.io.spdk:cnode9390", 00:24:47.173 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:24:47.173 "method": "nvmf_create_subsystem", 00:24:47.173 "req_id": 1 00:24:47.173 } 00:24:47.173 Got JSON-RPC error response 00:24:47.173 response: 00:24:47.173 { 00:24:47.173 "code": -32602, 00:24:47.173 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:24:47.173 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:24:47.173 16:38:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:24:47.173 16:38:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2720 00:24:47.431 [2024-07-22 16:38:07.013803] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2720: invalid model number 'SPDK_Controller' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:24:47.431 { 00:24:47.431 "nqn": "nqn.2016-06.io.spdk:cnode2720", 00:24:47.431 "model_number": "SPDK_Controller\u001f", 00:24:47.431 "method": "nvmf_create_subsystem", 00:24:47.431 "req_id": 1 00:24:47.431 } 00:24:47.431 Got JSON-RPC error response 00:24:47.431 response: 00:24:47.431 { 00:24:47.431 "code": -32602, 00:24:47.431 "message": "Invalid MN SPDK_Controller\u001f" 00:24:47.431 }' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:24:47.431 { 00:24:47.431 "nqn": "nqn.2016-06.io.spdk:cnode2720", 00:24:47.431 "model_number": "SPDK_Controller\u001f", 00:24:47.431 "method": "nvmf_create_subsystem", 00:24:47.431 "req_id": 1 00:24:47.431 } 00:24:47.431 Got JSON-RPC error response 00:24:47.431 response: 00:24:47.431 { 00:24:47.431 "code": -32602, 00:24:47.431 "message": "Invalid MN SPDK_Controller\u001f" 00:24:47.431 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' 
'93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:24:47.431 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:24:47.432 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
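The long run of printf/echo/string+= entries surrounding this point (it continues below) is gen_random_s from target/invalid.sh assembling a 21-character serial number one random printable character at a time; invalid.sh@16 seeds RANDOM=0 first so the string is reproducible across runs. The result is handed to nvmf_create_subsystem, which must fail with a JSON-RPC -32602 "Invalid SN" error, the same negative-test pattern as the foobar-target and control-character serial/model checks earlier. A condensed sketch of the generator and the check follows; it draws code points 32..127 arithmetically instead of indexing the chars array the harness uses, and rpc.py once more stands in for the full scripts/rpc.py path.

    # Build a string of $1 random characters drawn from ASCII 32..127.
    gen_random_s() {
        local length=$1 ll c string=
        for (( ll = 0; ll < length; ll++ )); do
            # printf %b expands the \xHH escape into the actual character.
            printf -v c '%b' "\\x$(printf '%x' $(( RANDOM % 96 + 32 )))"
            string+=$c
        done
        echo "$string"
    }

    # Negative test: the target must reject the generated serial.
    out=$(rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26927 \
              -s "$(gen_random_s 21)" 2>&1) || true
    [[ $out == *"Invalid SN"* ]]   # expect JSON-RPC error code -32602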
00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
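The trace above is target/invalid.sh's gen_random_s helper running under xtrace: on every pass it takes one decimal code from the chars array, converts it to hex with printf %x, materializes the byte with echo -e '\xNN', and appends it to string until length characters have been gathered; the [[ X == \- ]] guard and echo just below then emit the finished random string. A minimal standalone sketch of that pattern in the same shell dialect, assuming the 32..127 code range shown in the chars array (the function body is reconstructed from the trace, not copied from the script):

    # Hypothetical reconstruction of the gen_random_s pattern visible in
    # the xtrace above: append `length` random printable characters.
    gen_random_s() {
        local length=$1 ll code string=
        local chars=({32..127})          # decimal codes, as in the chars array
        for ((ll = 0; ll < length; ll++)); do
            code=${chars[RANDOM % ${#chars[@]}]}
            # printf %x gives the hex digits; echo -e turns \xNN into the byte
            string+=$(echo -e "\\x$(printf %x "$code")")
        done
        echo "$string"
    }

Called as gen_random_s 21, this yields a 21-character string such as the 'XqB]?Y"}+2>NuGo5c^b =' serial number handed to nvmf_create_subsystem just below, which is one character over the limit and so must be rejected with "Invalid SN".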
00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ X == \- ]] 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'XqB]?Y"}+2>NuGo5c^b =' 00:24:47.689 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'XqB]?Y"}+2>NuGo5c^b =' nqn.2016-06.io.spdk:cnode26927 00:24:47.949 [2024-07-22 16:38:07.387047] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26927: invalid serial number 'XqB]?Y"}+2>NuGo5c^b =' 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:24:47.949 { 00:24:47.949 "nqn": "nqn.2016-06.io.spdk:cnode26927", 00:24:47.949 "serial_number": "XqB]?Y\"}+2>NuGo5c^b =", 00:24:47.949 "method": "nvmf_create_subsystem", 00:24:47.949 "req_id": 1 00:24:47.949 } 00:24:47.949 Got JSON-RPC error response 00:24:47.949 response: 00:24:47.949 { 00:24:47.949 "code": -32602, 00:24:47.949 "message": "Invalid SN XqB]?Y\"}+2>NuGo5c^b =" 00:24:47.949 }' 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:24:47.949 { 00:24:47.949 "nqn": "nqn.2016-06.io.spdk:cnode26927", 00:24:47.949 "serial_number": "XqB]?Y\"}+2>NuGo5c^b =", 00:24:47.949 "method": "nvmf_create_subsystem", 00:24:47.949 "req_id": 1 00:24:47.949 } 00:24:47.949 Got JSON-RPC error response 00:24:47.949 response: 00:24:47.949 { 00:24:47.949 "code": -32602, 00:24:47.949 "message": "Invalid SN XqB]?Y\"}+2>NuGo5c^b =" 00:24:47.949 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:24:47.949 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 76 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x2e' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:24:47.950 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 
00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ s == \- ]] 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 's(+V`w2"gKLy68+%'\''#.!8S2qs=c,JE,l~P!IR0@nz' 00:24:47.951 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 's(+V`w2"gKLy68+%'\''#.!8S2qs=c,JE,l~P!IR0@nz' nqn.2016-06.io.spdk:cnode18309 00:24:48.210 [2024-07-22 16:38:07.776290] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18309: invalid model number 's(+V`w2"gKLy68+%'#.!8S2qs=c,JE,l~P!IR0@nz' 00:24:48.210 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:24:48.210 { 00:24:48.210 "nqn": 
"nqn.2016-06.io.spdk:cnode18309", 00:24:48.210 "model_number": "s(+V`w2\"gKLy68+%'\''#.!8S2qs=c,JE,l~P!IR0@nz", 00:24:48.210 "method": "nvmf_create_subsystem", 00:24:48.210 "req_id": 1 00:24:48.210 } 00:24:48.210 Got JSON-RPC error response 00:24:48.210 response: 00:24:48.210 { 00:24:48.210 "code": -32602, 00:24:48.210 "message": "Invalid MN s(+V`w2\"gKLy68+%'\''#.!8S2qs=c,JE,l~P!IR0@nz" 00:24:48.210 }' 00:24:48.210 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:24:48.210 { 00:24:48.210 "nqn": "nqn.2016-06.io.spdk:cnode18309", 00:24:48.210 "model_number": "s(+V`w2\"gKLy68+%'#.!8S2qs=c,JE,l~P!IR0@nz", 00:24:48.210 "method": "nvmf_create_subsystem", 00:24:48.210 "req_id": 1 00:24:48.210 } 00:24:48.210 Got JSON-RPC error response 00:24:48.210 response: 00:24:48.210 { 00:24:48.210 "code": -32602, 00:24:48.210 "message": "Invalid MN s(+V`w2\"gKLy68+%'#.!8S2qs=c,JE,l~P!IR0@nz" 00:24:48.210 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:24:48.210 16:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:24:48.489 [2024-07-22 16:38:08.029182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.489 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:24:48.746 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:24:48.746 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:24:48.746 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:24:48.746 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:24:48.746 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:24:49.003 [2024-07-22 16:38:08.546824] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:24:49.004 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:24:49.004 { 00:24:49.004 "nqn": "nqn.2016-06.io.spdk:cnode", 00:24:49.004 "listen_address": { 00:24:49.004 "trtype": "tcp", 00:24:49.004 "traddr": "", 00:24:49.004 "trsvcid": "4421" 00:24:49.004 }, 00:24:49.004 "method": "nvmf_subsystem_remove_listener", 00:24:49.004 "req_id": 1 00:24:49.004 } 00:24:49.004 Got JSON-RPC error response 00:24:49.004 response: 00:24:49.004 { 00:24:49.004 "code": -32602, 00:24:49.004 "message": "Invalid parameters" 00:24:49.004 }' 00:24:49.004 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:24:49.004 { 00:24:49.004 "nqn": "nqn.2016-06.io.spdk:cnode", 00:24:49.004 "listen_address": { 00:24:49.004 "trtype": "tcp", 00:24:49.004 "traddr": "", 00:24:49.004 "trsvcid": "4421" 00:24:49.004 }, 00:24:49.004 "method": "nvmf_subsystem_remove_listener", 00:24:49.004 "req_id": 1 00:24:49.004 } 00:24:49.004 Got JSON-RPC error response 00:24:49.004 response: 00:24:49.004 { 00:24:49.004 "code": -32602, 00:24:49.004 "message": "Invalid parameters" 00:24:49.004 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:24:49.004 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23324 -i 0 00:24:49.261 [2024-07-22 16:38:08.791585] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23324: invalid cntlid range [0-65519] 00:24:49.261 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:24:49.261 { 00:24:49.261 "nqn": "nqn.2016-06.io.spdk:cnode23324", 00:24:49.261 "min_cntlid": 0, 00:24:49.261 "method": "nvmf_create_subsystem", 00:24:49.261 "req_id": 1 00:24:49.261 } 00:24:49.261 Got JSON-RPC error response 00:24:49.261 response: 00:24:49.261 { 00:24:49.261 "code": -32602, 00:24:49.261 "message": "Invalid cntlid range [0-65519]" 00:24:49.261 }' 00:24:49.261 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:24:49.261 { 00:24:49.261 "nqn": "nqn.2016-06.io.spdk:cnode23324", 00:24:49.261 "min_cntlid": 0, 00:24:49.261 "method": "nvmf_create_subsystem", 00:24:49.261 "req_id": 1 00:24:49.261 } 00:24:49.261 Got JSON-RPC error response 00:24:49.261 response: 00:24:49.261 { 00:24:49.261 "code": -32602, 00:24:49.261 "message": "Invalid cntlid range [0-65519]" 00:24:49.261 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:24:49.261 16:38:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29152 -i 65520 00:24:49.519 [2024-07-22 16:38:09.052475] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29152: invalid cntlid range [65520-65519] 00:24:49.519 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:24:49.519 { 00:24:49.519 "nqn": "nqn.2016-06.io.spdk:cnode29152", 00:24:49.519 "min_cntlid": 65520, 00:24:49.519 "method": "nvmf_create_subsystem", 00:24:49.519 "req_id": 1 00:24:49.519 } 00:24:49.519 Got JSON-RPC error response 00:24:49.519 response: 00:24:49.519 { 00:24:49.519 "code": -32602, 00:24:49.519 "message": "Invalid cntlid range [65520-65519]" 00:24:49.519 }' 00:24:49.519 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:24:49.519 { 00:24:49.519 "nqn": "nqn.2016-06.io.spdk:cnode29152", 00:24:49.519 "min_cntlid": 65520, 00:24:49.519 "method": "nvmf_create_subsystem", 00:24:49.519 "req_id": 1 00:24:49.519 } 00:24:49.519 Got JSON-RPC error response 00:24:49.519 response: 00:24:49.519 { 00:24:49.519 "code": -32602, 00:24:49.519 "message": "Invalid cntlid range [65520-65519]" 00:24:49.519 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:24:49.519 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18656 -I 0 00:24:49.776 [2024-07-22 16:38:09.305373] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18656: invalid cntlid range [1-0] 00:24:49.777 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:24:49.777 { 00:24:49.777 "nqn": "nqn.2016-06.io.spdk:cnode18656", 00:24:49.777 "max_cntlid": 0, 00:24:49.777 "method": "nvmf_create_subsystem", 00:24:49.777 "req_id": 1 00:24:49.777 } 00:24:49.777 Got JSON-RPC error response 00:24:49.777 response: 00:24:49.777 { 00:24:49.777 "code": -32602, 00:24:49.777 "message": "Invalid cntlid range [1-0]" 00:24:49.777 }' 00:24:49.777 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:24:49.777 { 00:24:49.777 "nqn": "nqn.2016-06.io.spdk:cnode18656", 00:24:49.777 "max_cntlid": 0, 00:24:49.777 "method": "nvmf_create_subsystem", 00:24:49.777 "req_id": 1 00:24:49.777 } 00:24:49.777 Got JSON-RPC error response 00:24:49.777 response: 00:24:49.777 { 00:24:49.777 
"code": -32602, 00:24:49.777 "message": "Invalid cntlid range [1-0]" 00:24:49.777 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:24:49.777 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16183 -I 65520 00:24:50.034 [2024-07-22 16:38:09.546145] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16183: invalid cntlid range [1-65520] 00:24:50.034 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:24:50.034 { 00:24:50.034 "nqn": "nqn.2016-06.io.spdk:cnode16183", 00:24:50.034 "max_cntlid": 65520, 00:24:50.034 "method": "nvmf_create_subsystem", 00:24:50.034 "req_id": 1 00:24:50.034 } 00:24:50.034 Got JSON-RPC error response 00:24:50.034 response: 00:24:50.034 { 00:24:50.034 "code": -32602, 00:24:50.034 "message": "Invalid cntlid range [1-65520]" 00:24:50.034 }' 00:24:50.034 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:24:50.034 { 00:24:50.034 "nqn": "nqn.2016-06.io.spdk:cnode16183", 00:24:50.034 "max_cntlid": 65520, 00:24:50.034 "method": "nvmf_create_subsystem", 00:24:50.034 "req_id": 1 00:24:50.034 } 00:24:50.034 Got JSON-RPC error response 00:24:50.034 response: 00:24:50.034 { 00:24:50.034 "code": -32602, 00:24:50.034 "message": "Invalid cntlid range [1-65520]" 00:24:50.034 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:24:50.034 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24157 -i 6 -I 5 00:24:50.292 [2024-07-22 16:38:09.786956] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24157: invalid cntlid range [6-5] 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:24:50.292 { 00:24:50.292 "nqn": "nqn.2016-06.io.spdk:cnode24157", 00:24:50.292 "min_cntlid": 6, 00:24:50.292 "max_cntlid": 5, 00:24:50.292 "method": "nvmf_create_subsystem", 00:24:50.292 "req_id": 1 00:24:50.292 } 00:24:50.292 Got JSON-RPC error response 00:24:50.292 response: 00:24:50.292 { 00:24:50.292 "code": -32602, 00:24:50.292 "message": "Invalid cntlid range [6-5]" 00:24:50.292 }' 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:24:50.292 { 00:24:50.292 "nqn": "nqn.2016-06.io.spdk:cnode24157", 00:24:50.292 "min_cntlid": 6, 00:24:50.292 "max_cntlid": 5, 00:24:50.292 "method": "nvmf_create_subsystem", 00:24:50.292 "req_id": 1 00:24:50.292 } 00:24:50.292 Got JSON-RPC error response 00:24:50.292 response: 00:24:50.292 { 00:24:50.292 "code": -32602, 00:24:50.292 "message": "Invalid cntlid range [6-5]" 00:24:50.292 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:24:50.292 { 00:24:50.292 "name": "foobar", 00:24:50.292 "method": "nvmf_delete_target", 00:24:50.292 "req_id": 1 00:24:50.292 } 00:24:50.292 Got JSON-RPC error response 00:24:50.292 response: 00:24:50.292 { 00:24:50.292 "code": -32602, 00:24:50.292 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:24:50.292 }' 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:24:50.292 { 00:24:50.292 "name": "foobar", 00:24:50.292 "method": "nvmf_delete_target", 00:24:50.292 "req_id": 1 00:24:50.292 } 00:24:50.292 Got JSON-RPC error response 00:24:50.292 response: 00:24:50.292 { 00:24:50.292 "code": -32602, 00:24:50.292 "message": "The specified target doesn't exist, cannot delete it." 00:24:50.292 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.292 16:38:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.292 rmmod nvme_tcp 00:24:50.292 rmmod nvme_fabrics 00:24:50.292 rmmod nvme_keyring 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2727210 ']' 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2727210 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 2727210 ']' 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 2727210 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2727210 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2727210' 00:24:50.551 killing process with pid 2727210 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 2727210 00:24:50.551 16:38:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 2727210 00:24:50.809 16:38:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.809 16:38:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.809 16:38:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.809 16:38:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.809 16:38:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.809 16:38:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.809 16:38:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:24:50.809 16:38:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.710 16:38:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.710 00:24:52.710 real 0m9.868s 00:24:52.710 user 0m23.456s 00:24:52.710 sys 0m2.847s 00:24:52.710 16:38:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:52.710 16:38:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:24:52.710 ************************************ 00:24:52.710 END TEST nvmf_invalid 00:24:52.710 ************************************ 00:24:52.710 16:38:12 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:24:52.710 16:38:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:52.710 16:38:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:52.710 16:38:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:52.710 ************************************ 00:24:52.710 START TEST nvmf_abort 00:24:52.710 ************************************ 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:24:52.710 * Looking for test storage... 00:24:52.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.710 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.969 16:38:12 
nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:24:52.969 16:38:12 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.970 16:38:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:24:55.748 Found 0000:82:00.0 (0x8086 - 0x159b) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:24:55.748 Found 0000:82:00.1 (0x8086 - 0x159b) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:24:55.748 Found net devices under 0000:82:00.0: cvl_0_0 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:24:55.748 Found net devices under 0000:82:00.1: cvl_0_1 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:55.748 16:38:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:55.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:55.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:24:55.748 00:24:55.748 --- 10.0.0.2 ping statistics --- 00:24:55.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.748 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:24:55.748 00:24:55.748 --- 10.0.0.1 ping statistics --- 00:24:55.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.748 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:55.748 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2730274 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2730274 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 2730274 ']' 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:55.749 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:55.749 [2024-07-22 16:38:15.206934] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:55.749 [2024-07-22 16:38:15.207032] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.749 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.749 [2024-07-22 16:38:15.284280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:55.749 [2024-07-22 16:38:15.372583] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.749 [2024-07-22 16:38:15.372642] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.749 [2024-07-22 16:38:15.372656] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.749 [2024-07-22 16:38:15.372667] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.749 [2024-07-22 16:38:15.372677] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.749 [2024-07-22 16:38:15.375985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.749 [2024-07-22 16:38:15.376055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.749 [2024-07-22 16:38:15.376059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.007 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:56.007 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:24:56.007 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:56.007 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.007 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:56.007 16:38:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.007 16:38:15 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:24:56.007 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.007 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:56.007 [2024-07-22 16:38:15.519869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.007 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:56.008 Malloc0 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:56.008 Delay0 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:56.008 16:38:15 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:56.008 [2024-07-22 16:38:15.587571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.008 16:38:15 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:24:56.008 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.266 [2024-07-22 16:38:15.683052] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:24:58.168 Initializing NVMe Controllers 00:24:58.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:24:58.168 controller IO queue size 128 less than required 00:24:58.168 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:24:58.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:24:58.168 Initialization complete. Launching workers. 
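Note on the abort run above: the example opens a single I/O qpair of depth 128 against the Delay0-backed namespace, and because bdev_delay_create was given 1000000 for every latency knob (the -r/-t/-w/-n arguments are average and p99 read/write latencies in microseconds, so roughly one second per I/O), almost every read is still outstanding when an abort is issued for it. In the summary that follows, "I/O completed" counts reads that finished on their own, while "abort submitted ... success/unsuccess" counts aborts the target accepted versus aborts that most likely lost the race with a completing command. A minimal sketch of the same target state, assuming a running nvmf_tgt and with rpc.py standing in for the full scripts/rpc.py path used throughout this log:

    # sketch only: rebuild the abort-test target by hand (names and addresses from the log above)
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB backing store, 4 KiB blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420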
00:24:58.168 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33151 00:24:58.168 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33216, failed to submit 62 00:24:58.168 success 33155, unsuccess 61, failed 0 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:58.168 rmmod nvme_tcp 00:24:58.168 rmmod nvme_fabrics 00:24:58.168 rmmod nvme_keyring 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2730274 ']' 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2730274 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 2730274 ']' 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 2730274 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:58.168 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2730274 00:24:58.427 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:58.427 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:58.427 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2730274' 00:24:58.427 killing process with pid 2730274 00:24:58.427 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 2730274 00:24:58.427 16:38:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 2730274 00:24:58.684 16:38:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:58.684 16:38:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:58.684 16:38:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:58.684 16:38:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.684 16:38:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:58.684 16:38:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.684 16:38:18 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.684 16:38:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.587 16:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:00.587 00:25:00.587 real 0m7.832s 00:25:00.587 user 0m10.434s 00:25:00.587 sys 0m3.032s 00:25:00.587 16:38:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:00.587 16:38:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:00.587 ************************************ 00:25:00.587 END TEST nvmf_abort 00:25:00.587 ************************************ 00:25:00.587 16:38:20 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:25:00.587 16:38:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:00.587 16:38:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:00.587 16:38:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:00.587 ************************************ 00:25:00.587 START TEST nvmf_ns_hotplug_stress 00:25:00.587 ************************************ 00:25:00.587 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:25:00.587 * Looking for test storage... 00:25:00.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:00.588 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.588 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.846 16:38:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.846 16:38:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:25:00.846 16:38:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:03.385 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.385 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:25:03.385 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:03.385 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:03.385 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:03.385 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:03.385 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:03.385 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:25:03.385 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:03.385 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:25:03.386 Found 0000:82:00.0 (0x8086 - 0x159b) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:25:03.386 Found 0000:82:00.1 (0x8086 - 0x159b) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.386 16:38:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:25:03.386 Found net devices under 0000:82:00.0: cvl_0_0 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:25:03.386 Found net devices under 0000:82:00.1: cvl_0_1 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
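With both cvl ports discovered, nvmf/common.sh simply takes the first net device as the target interface and the second as the initiator, so 10.0.0.1 <-> 10.0.0.2 traffic rides the physical E810 link (the two ports are presumably cabled back to back on this rig) rather than loopback. The interface names come straight from sysfs; a sketch of the lookup the pci_net_devs lines above perform, with the PCI addresses taken from this log:

    # sketch: which net device sits behind each NIC function
    for pci in 0000:82:00.0 0000:82:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0, cvl_0_1
    done

The namespace moves that follow put cvl_0_0 (and later the target app) into cvl_0_0_ns_spdk, giving the target its own network stack so the host-side initiator has to connect over real TCP.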
00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:03.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:25:03.386 00:25:03.386 --- 10.0.0.2 ping statistics --- 00:25:03.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.386 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:03.386 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:25:03.386 00:25:03.387 --- 10.0.0.1 ping statistics --- 00:25:03.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.387 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2732899 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2732899 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 2732899 ']' 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:03.387 16:38:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:03.387 [2024-07-22 16:38:22.798686] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:03.387 [2024-07-22 16:38:22.798772] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.387 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.387 [2024-07-22 16:38:22.874043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:03.387 [2024-07-22 16:38:22.961075] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:03.387 [2024-07-22 16:38:22.961130] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.387 [2024-07-22 16:38:22.961143] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.387 [2024-07-22 16:38:22.961154] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.387 [2024-07-22 16:38:22.961164] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.387 [2024-07-22 16:38:22.964985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.387 [2024-07-22 16:38:22.965053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.387 [2024-07-22 16:38:22.965057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.646 16:38:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:03.646 16:38:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:25:03.646 16:38:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:03.646 16:38:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:03.646 16:38:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:03.646 16:38:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.646 16:38:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:25:03.646 16:38:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:03.904 [2024-07-22 16:38:23.383030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.904 16:38:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:04.162 16:38:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.420 [2024-07-22 16:38:23.957769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.420 16:38:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:04.678 16:38:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:25:04.937 Malloc0 00:25:04.937 16:38:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:05.195 Delay0 00:25:05.195 16:38:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:05.452 16:38:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:25:05.711 NULL1 00:25:05.711 16:38:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:25:05.969 16:38:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2733205 00:25:05.969 16:38:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:25:05.969 16:38:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:05.969 16:38:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:05.969 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.344 Read completed with error (sct=0, sc=11) 00:25:07.344 16:38:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:07.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:07.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:07.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:07.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:07.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:07.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:07.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:07.344 16:38:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:25:07.344 16:38:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:25:07.602 true 00:25:07.602 16:38:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:07.602 16:38:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:08.534 16:38:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:08.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:08.792 16:38:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:25:08.792 16:38:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:25:09.050 true 00:25:09.050 16:38:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:09.050 16:38:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
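Each pass of the stress loop repeats the same four steps while spdk_nvme_perf keeps issuing 512-byte random reads at queue depth 128 for 30 seconds (-t 30): check that the perf process (pid 2733205 here) is still alive, hot-remove namespace 1 out from under it, re-attach Delay0, and grow NULL1 by one size unit. The host-side "Read completed with error (sct=0, sc=11)" lines are the expected fallout, since generic status 0x0b is Invalid Namespace or Format, i.e. reads that arrived while the namespace was detached; perf's -Q 1000 flag appears to rate-limit these prints, hence the recurring "Message suppressed 999 times". A sketch of one iteration, again with rpc.py standing in for the full scripts/rpc.py path:

    # sketch of the loop body exercised above (names from the log)
    kill -0 "$PERF_PID"                                              # perf must survive the churn
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-unplug with reads in flight
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-replug
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"                       # NULL1: 1000 -> 1001 -> 1002 ...

which is why the null_size values below climb steadily past 1001 while the removes and adds alternate.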
00:25:09.308 16:38:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:09.566 16:38:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:25:09.566 16:38:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:25:09.824 true 00:25:09.824 16:38:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:09.824 16:38:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:10.756 16:38:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:10.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:10.756 16:38:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:25:10.756 16:38:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:25:11.013 true 00:25:11.013 16:38:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:11.013 16:38:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:11.271 16:38:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:11.528 16:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:25:11.528 16:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:25:11.786 true 00:25:11.786 16:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:11.786 16:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:12.720 16:38:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:12.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:12.978 16:38:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:25:12.978 16:38:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:25:12.978 true 00:25:13.235 16:38:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:13.235 16:38:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:25:13.235 16:38:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:13.491 16:38:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:25:13.491 16:38:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:25:13.749 true 00:25:13.749 16:38:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:13.749 16:38:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:14.681 16:38:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:14.940 16:38:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:25:14.940 16:38:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:25:15.198 true 00:25:15.198 16:38:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:15.198 16:38:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:15.455 16:38:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:15.713 16:38:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:25:15.713 16:38:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:25:15.971 true 00:25:15.971 16:38:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:15.971 16:38:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:16.904 16:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:16.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.904 16:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:25:16.904 16:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:25:17.162 true 00:25:17.162 16:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:17.162 16:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:17.419 16:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:17.677 16:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:25:17.677 16:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:25:17.935 true 00:25:17.935 16:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:17.935 16:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:18.867 16:38:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:19.125 16:38:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:25:19.125 16:38:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:25:19.384 true 00:25:19.384 16:38:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:19.384 16:38:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:19.642 16:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:19.900 16:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:25:19.900 16:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:25:20.156 true 00:25:20.156 16:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:20.156 16:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:21.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:21.088 16:38:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:21.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:21.088 16:38:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:25:21.088 16:38:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:25:21.345 true 00:25:21.345 16:38:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:21.345 16:38:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:21.602 16:38:41 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:21.860 16:38:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:25:21.860 16:38:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:25:22.117 true 00:25:22.117 16:38:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:22.117 16:38:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:23.048 16:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:23.306 16:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:25:23.306 16:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:25:23.562 true 00:25:23.562 16:38:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:23.562 16:38:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:23.818 16:38:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:24.075 16:38:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:25:24.075 16:38:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:25:24.332 true 00:25:24.332 16:38:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:24.332 16:38:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:25.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:25.265 16:38:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:25.523 16:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:25:25.523 16:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:25:25.780 true 00:25:25.780 16:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:25.780 16:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:26.038 16:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:26.296 16:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:25:26.296 16:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:25:26.553 true 00:25:26.553 16:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:26.553 16:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:26.811 16:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:27.068 16:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:25:27.068 16:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:25:27.325 true 00:25:27.325 16:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:27.325 16:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:28.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:28.258 16:38:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:28.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:28.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:28.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:28.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:28.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:28.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:28.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:28.516 16:38:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:25:28.516 16:38:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:25:28.774 true 00:25:28.774 16:38:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:28.774 16:38:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:29.706 16:38:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:29.964 16:38:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:25:29.964 16:38:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:25:29.964 true 00:25:29.964 16:38:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:29.964 16:38:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:30.222 16:38:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:30.480 16:38:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:25:30.480 16:38:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:25:30.738 true 00:25:30.738 16:38:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:30.738 16:38:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:31.672 16:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:31.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:31.930 16:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:25:31.930 16:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:25:32.187 true 00:25:32.187 16:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:32.187 16:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:32.445 16:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:32.703 16:38:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:25:32.703 16:38:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:25:32.961 true 00:25:32.961 16:38:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:32.961 16:38:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:33.892 16:38:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:33.892 16:38:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:25:33.892 16:38:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1026 00:25:34.150 true 00:25:34.150 16:38:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:34.150 16:38:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:34.407 16:38:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:34.665 16:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:25:34.665 16:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:25:34.922 true 00:25:34.922 16:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205 00:25:34.922 16:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:35.856 16:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:36.113 16:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:25:36.113 16:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:25:36.113 Initializing NVMe Controllers 00:25:36.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:36.113 Controller IO queue size 128, less than required. 00:25:36.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:36.113 Controller IO queue size 128, less than required. 00:25:36.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:36.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:36.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:36.113 Initialization complete. Launching workers. 
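The sh@44-sh@50 records above are iterations of the test's hot-plug loop: as long as the I/O generator (pid 2733205) is still alive, namespace 1 is hot-removed and re-added while the NULL1 bdev is grown by one unit per pass. A minimal bash reconstruction of what those trace tags correspond to, not the script verbatim (RPC and NQN are shorthand for the full rpc.py path and subsystem NQN seen in the records; perf_pid and the starting null_size are illustrative, since neither is shown in this excerpt):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    perf_pid=2733205   # illustrative: pid of the I/O generator started earlier in the test
    null_size=1014     # illustrative: this excerpt picks up at 1015
    while kill -0 "$perf_pid"; do                   # sh@44: loop until the I/O generator exits
        $RPC nvmf_subsystem_remove_ns $NQN 1        # sh@45: hot-remove namespace 1
        $RPC nvmf_subsystem_add_ns $NQN Delay0      # sh@46: hot-add it back
        null_size=$((null_size + 1))                # sh@49: 1015, 1016, ... in the records above
        $RPC bdev_null_resize NULL1 "$null_size"    # sh@50: resize the null bdev under live I/O
    done

When the generator exits, the kill -0 check fails with the "No such process" message visible just below, which is what terminates the loop.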
00:25:36.113 ========================================================
00:25:36.113                                                                                 Latency(us)
00:25:36.113 Device Information                                                       :       IOPS      MiB/s    Average        min         max
00:25:36.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     842.54       0.41   84772.63    2218.99  1038108.78
00:25:36.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   12117.99       5.92   10563.76    2832.11   448658.56
00:25:36.113 ========================================================
00:25:36.113 Total                                                                    :   12960.53       6.33   15387.93    2218.99  1038108.78
00:25:36.113
00:25:36.113 true
00:25:36.114 16:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2733205
00:25:36.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2733205) - No such process
00:25:36.114 16:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2733205
00:25:36.371 16:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:25:36.371 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:25:36.629 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:25:36.629 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:25:36.629 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:25:36.629 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:25:36.629 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:25:36.886 null0
00:25:36.886 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:25:36.886 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:25:36.886 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:25:37.145 null1
00:25:37.145 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:25:37.145 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:25:37.145 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:25:37.402 null2
00:25:37.402 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:25:37.403 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:25:37.403 16:38:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:25:37.660 null3
00:25:37.660 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:25:37.660 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:25:37.660 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:25:37.917 null4 00:25:37.917 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:37.917 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:37.917 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:25:38.174 null5 00:25:38.174 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:38.174 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:38.174 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:25:38.431 null6 00:25:38.431 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:38.431 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:38.431 16:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:25:38.690 null7 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
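Each of the eight workers gets its own null bdev, created in the sh@59/sh@60 loop traced above. A short sketch of just that step (RPC is shorthand as before; in SPDK's rpc.py the positional arguments here are the bdev name, its size in MiB, and its block size in bytes):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                    # sh@58
    for ((i = 0; i < nthreads; i++)); do          # sh@59: creates null0 .. null7
        $RPC bdev_null_create "null$i" 100 4096   # sh@60: 100 MiB bdev, 4096-byte blocks
    done

The null bdev discards writes and returns zeroes on reads, so it is a cheap backing device for namespace attach/detach churn.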
00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
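The interleaved sh@62-sh@64 and sh@14-sh@18 tags in the records above and below come from those workers being launched in the background: each one repeatedly hot-adds and hot-removes a single namespace against the same subsystem, and the parent later reaps them all (the sh@66 wait with eight pids). A minimal reconstruction of that pattern, with shorthand as in the earlier sketches (the real logic lives in test/nvmf/target/ns_hotplug_stress.sh):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    nthreads=8                                                 # sh@58
    add_remove() {
        local nsid=$1 bdev=$2                                  # sh@14
        for ((i = 0; i < 10; i++)); do                         # sh@16: ten add/remove cycles
            $RPC nvmf_subsystem_add_ns -n "$nsid" $NQN "$bdev" # sh@17: hot-add the namespace
            $RPC nvmf_subsystem_remove_ns $NQN "$nsid"         # sh@18: hot-remove it again
        done
    }
    pids=()                                                    # sh@58
    for ((i = 0; i < nthreads; i++)); do                       # sh@62
        add_remove $((i + 1)) "null$i" &                       # sh@63: one worker per namespace
        pids+=($!)                                             # sh@64
    done
    wait "${pids[@]}"                                          # sh@66: reap all eight workers

Because the eight workers run concurrently against one subsystem, the add/remove records that follow arrive out of order, which is exactly the contention this stress test is after.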
00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2737117 2737118 2737120 2737123 2737125 2737127 2737129 2737131 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:38.690 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:38.949 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:38.949 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:38.949 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:38.949 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:38.949 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:38.949 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:38.949 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:38.949 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.207 16:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:39.465 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:39.465 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:39.465 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:39.465 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:39.465 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:39.465 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:39.465 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:39.465 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.723 16:38:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:39.723 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:39.981 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:39.981 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:39.981 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:39.981 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:39.981 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:39.981 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:39.981 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:39.981 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:40.239 16:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:40.497 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:40.497 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:40.497 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:40.497 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:40.497 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:40.497 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:40.497 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:40.755 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:40.755 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:40.755 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:40.755 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:40.755 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:40.755 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:40.755 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.013 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:41.014 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.014 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.014 
16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:41.272 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:41.272 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:41.272 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:41.272 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:41.272 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:41.272 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:41.272 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:41.272 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:41.529 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.529 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.529 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:41.529 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.529 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.529 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:41.529 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.529 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.529 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:41.529 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:41.530 16:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:41.788 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:41.788 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:41.788 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:41.788 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:41.788 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:41.788 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:41.788 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:41.788 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:42.046 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.047 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.047 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:42.047 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.047 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.047 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:42.047 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.047 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.047 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:42.304 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:42.304 
16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:42.304 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:42.304 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:42.304 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:42.304 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:42.304 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:42.304 16:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.562 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:42.563 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:42.563 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:42.563 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:42.828 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:42.828 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:42.828 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:42.828 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:42.828 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:42.828 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:42.828 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:42.828 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.087 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:43.345 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:43.345 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:43.345 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:43.345 16:39:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:43.345 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:43.345 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:43.345 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:43.345 16:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:43.603 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:43.862 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:43.862 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:43.862 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:43.862 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:43.862 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:43.862 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:43.862 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:43.862 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:44.120 rmmod nvme_tcp 00:25:44.120 rmmod nvme_fabrics 00:25:44.120 rmmod nvme_keyring 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2732899 ']' 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2732899 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 2732899 ']' 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 2732899 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:44.120 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2732899 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2732899' 00:25:44.378 killing process with pid 2732899 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 2732899 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 2732899 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.378 16:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.909 16:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:46.909 00:25:46.909 real 0m45.847s 00:25:46.909 user 3m27.366s 00:25:46.909 sys 0m16.948s 00:25:46.909 16:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:46.909 16:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:46.909 ************************************ 00:25:46.909 END TEST nvmf_ns_hotplug_stress 00:25:46.909 ************************************ 00:25:46.909 16:39:06 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:25:46.909 16:39:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:46.909 16:39:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:46.909 16:39:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.909 ************************************ 00:25:46.909 START TEST nvmf_connect_stress 00:25:46.909 ************************************ 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:25:46.909 * Looking for test storage... 
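# The trace above is the namespace hot-plug loop from
# target/ns_hotplug_stress.sh: each of its ten passes (the "(( i < 10 ))"
# guard at script line 16) attaches the null0..null7 bdevs as namespaces 1-8
# of nqn.2016-06.io.spdk:cnode1 (line 17) and then detaches them again
# (line 18). The adds and removes complete out of order in the trace, so they
# are presumably issued concurrently; a minimal sequential sketch of the same
# RPC traffic, assuming the default rpc.py socket:
#
#   for ((i = 0; i < 10; i++)); do
#       for n in {1..8}; do
#           scripts/rpc.py nvmf_subsystem_add_ns -n "$n" \
#               nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
#       done
#       for n in {1..8}; do
#           scripts/rpc.py nvmf_subsystem_remove_ns \
#               nqn.2016-06.io.spdk:cnode1 "$n"
#       done
#   done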
00:25:46.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.909 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:25:46.910 16:39:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:49.439 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.439 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:25:49.439 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:49.439 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:49.439 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:49.439 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:49.439 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:25:49.440 Found 0000:82:00.0 (0x8086 - 0x159b) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:25:49.440 Found 0000:82:00.1 (0x8086 - 0x159b) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:25:49.440 Found net devices under 0000:82:00.0: cvl_0_0 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.440 16:39:08 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:25:49.440 Found net devices under 0000:82:00.1: cvl_0_1 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:49.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:49.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:25:49.440 00:25:49.440 --- 10.0.0.2 ping statistics --- 00:25:49.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.440 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:25:49.440 00:25:49.440 --- 10.0.0.1 ping statistics --- 00:25:49.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.440 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2740773 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2740773 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 2740773 ']' 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.440 16:39:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:49.441 16:39:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:49.441 [2024-07-22 16:39:08.735211] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
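# The bring-up traced above (nvmf_tcp_init in nvmf/common.sh) splits the two
# e810 ports into a point-to-point test topology: cvl_0_0 moves into the
# cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2/24) while
# cvl_0_1 stays in the default namespace as the initiator (10.0.0.1/24),
# with an iptables ACCEPT for TCP port 4420 and one ping in each direction
# as a sanity check. The same steps, condensed from the trace:
#
#   ip netns add cvl_0_0_ns_spdk
#   ip link set cvl_0_0 netns cvl_0_0_ns_spdk
#   ip addr add 10.0.0.1/24 dev cvl_0_1
#   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
#   ip link set cvl_0_1 up
#   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
#   ip netns exec cvl_0_0_ns_spdk ip link set lo up
#   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
#   ping -c 1 10.0.0.2                                # initiator -> target
#   ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator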
00:25:49.441 [2024-07-22 16:39:08.735293] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.441 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.441 [2024-07-22 16:39:08.813905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:49.441 [2024-07-22 16:39:08.907311] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.441 [2024-07-22 16:39:08.907366] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.441 [2024-07-22 16:39:08.907389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.441 [2024-07-22 16:39:08.907400] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.441 [2024-07-22 16:39:08.907411] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.441 [2024-07-22 16:39:08.907493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.441 [2024-07-22 16:39:08.907558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.441 [2024-07-22 16:39:08.907561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:49.441 [2024-07-22 16:39:09.048613] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:49.441 [2024-07-22 16:39:09.082115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.441 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:49.699 NULL1 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2740878 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.699 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:49.957 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.957 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:49.957 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:49.957 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.957 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:50.216 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.216 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:50.216 16:39:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:50.216 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.216 16:39:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:50.473 16:39:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.473 16:39:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:50.473 16:39:10 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:50.473 16:39:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.473 16:39:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:51.039 16:39:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.039 16:39:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:51.039 16:39:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:51.039 16:39:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.039 16:39:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:51.296 16:39:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.296 16:39:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:51.296 16:39:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:51.296 16:39:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.296 16:39:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:51.553 16:39:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.553 16:39:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:51.553 16:39:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:51.553 16:39:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.553 16:39:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:51.810 16:39:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.810 16:39:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:51.810 16:39:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:51.810 16:39:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.810 16:39:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:52.067 16:39:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.067 16:39:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:52.067 16:39:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:52.067 16:39:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.067 16:39:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:52.630 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.630 16:39:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:52.630 16:39:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:52.630 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.630 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:52.887 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.887 16:39:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:52.887 16:39:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:25:52.887 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.887 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:53.143 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.143 16:39:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:53.143 16:39:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:53.143 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.143 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:53.399 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.399 16:39:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:53.399 16:39:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:53.399 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.399 16:39:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:53.963 16:39:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.963 16:39:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:53.963 16:39:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:53.963 16:39:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.963 16:39:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:54.219 16:39:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.219 16:39:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:54.219 16:39:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:54.219 16:39:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.219 16:39:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:54.476 16:39:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.476 16:39:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:54.476 16:39:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:54.476 16:39:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.476 16:39:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:54.734 16:39:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.734 16:39:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:54.734 16:39:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:54.734 16:39:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.734 16:39:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:54.991 16:39:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.991 16:39:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:54.991 16:39:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:54.991 16:39:14 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.991 16:39:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:55.555 16:39:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.555 16:39:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:55.555 16:39:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:55.555 16:39:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.555 16:39:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:55.812 16:39:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.812 16:39:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:55.812 16:39:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:55.812 16:39:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.812 16:39:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:56.069 16:39:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.069 16:39:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:56.069 16:39:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:56.069 16:39:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.069 16:39:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:56.326 16:39:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.326 16:39:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:56.326 16:39:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:56.326 16:39:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.326 16:39:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:56.584 16:39:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.584 16:39:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:56.584 16:39:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:56.584 16:39:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.584 16:39:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:57.204 16:39:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.204 16:39:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:57.204 16:39:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:57.204 16:39:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.204 16:39:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:57.514 16:39:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.514 16:39:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:57.514 16:39:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:57.514 16:39:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:25:57.514 16:39:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:57.799 16:39:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.799 16:39:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:57.799 16:39:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:57.799 16:39:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.799 16:39:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:58.056 16:39:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.056 16:39:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:58.056 16:39:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:58.056 16:39:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.056 16:39:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:58.314 16:39:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.314 16:39:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:58.314 16:39:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:58.314 16:39:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.314 16:39:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:58.571 16:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.571 16:39:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:58.571 16:39:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:58.571 16:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.571 16:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:58.829 16:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.829 16:39:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:58.829 16:39:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:58.829 16:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.829 16:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:59.394 16:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.394 16:39:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:59.394 16:39:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:59.394 16:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.394 16:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:59.651 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.651 16:39:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:59.652 16:39:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:59.652 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.652 16:39:19 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:59.652 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2740878 00:25:59.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2740878) - No such process 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2740878 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:59.909 rmmod nvme_tcp 00:25:59.909 rmmod nvme_fabrics 00:25:59.909 rmmod nvme_keyring 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2740773 ']' 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2740773 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 2740773 ']' 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 2740773 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2740773 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2740773' 00:25:59.909 killing process with pid 2740773 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 2740773 00:25:59.909 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 2740773 00:26:00.167 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:00.167 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:00.167 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
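A note on the shell idiom that dominates the trace above: kill -0 PID delivers no signal at all; its exit status only reports whether the PID still exists and can be signalled. The repeated kill -0 2740878 / rpc_cmd pairs are therefore a liveness poll on the backgrounded stress process, with an RPC fired at the target on every pass, and the loop ends exactly when kill -0 fails with "No such process", as seen above. A minimal, self-contained sketch of that pattern, using illustrative stand-in helpers rather than connect_stress.sh's own functions:

  #!/usr/bin/env bash
  # Sketch of the poll-and-teardown idiom from the connect_stress trace.
  # issue_rpc and teardown_target are stand-ins, not the real helpers.
  issue_rpc()       { :; }   # stands in for rpc_cmd against the target
  teardown_target() { :; }   # stands in for nvmftestfini (rmmod nvme-tcp, killprocess)

  sleep 5 &                  # stands in for the backgrounded stress workload
  stress_pid=$!

  # kill -0 sends no signal; success merely means the PID is still alive.
  while kill -0 "$stress_pid" 2>/dev/null; do
      issue_rpc              # each pass doubles as the polling delay
  done

  # kill -0 now fails ("No such process" in the log): reap and clean up.
  wait "$stress_pid" 2>/dev/null
  rm -f rpc.txt              # the real script removes its rpc.txt scratch file at @39
  teardown_target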
00:26:00.167 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:00.167 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:00.167 16:39:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.167 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.167 16:39:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.699 16:39:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:02.699 00:26:02.699 real 0m15.676s 00:26:02.699 user 0m37.578s 00:26:02.699 sys 0m6.918s 00:26:02.699 16:39:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:02.699 16:39:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:02.699 ************************************ 00:26:02.699 END TEST nvmf_connect_stress 00:26:02.699 ************************************ 00:26:02.699 16:39:21 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:26:02.699 16:39:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:02.699 16:39:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:02.699 16:39:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:02.699 ************************************ 00:26:02.699 START TEST nvmf_fused_ordering 00:26:02.699 ************************************ 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:26:02.699 * Looking for test storage... 
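For orientation before the next test's output: the real/user/sys triple and the starred START TEST / END TEST banners come from the harness's run_test wrapper, which runs each test script under bash's time keyword and brackets its output so per-test results can be grepped out of one long log. A rough sketch of that wrapper pattern follows; it is illustrative only, not the literal autotest_common.sh implementation, which also handles xtrace state and exit-code bookkeeping.

  #!/usr/bin/env bash
  # Illustrative run_test-style wrapper producing banners and timing
  # in the same shape as the log above.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"              # the time keyword emits the real/user/sys lines
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }

  run_test demo_sleep sleep 1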
00:26:02.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:26:02.699 16:39:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.229 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:26:05.230 Found 0000:82:00.0 (0x8086 - 0x159b) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:26:05.230 Found 0000:82:00.1 (0x8086 - 0x159b) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:26:05.230 Found net devices under 0000:82:00.0: cvl_0_0 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:05.230 16:39:24 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:26:05.230 Found net devices under 0000:82:00.1: cvl_0_1 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:05.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:05.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:26:05.230 00:26:05.230 --- 10.0.0.2 ping statistics --- 00:26:05.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.230 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:05.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:26:05.230 00:26:05.230 --- 10.0.0.1 ping statistics --- 00:26:05.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.230 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2744478 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2744478 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 2744478 ']' 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:05.230 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:05.230 [2024-07-22 16:39:24.506776] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:26:05.230 [2024-07-22 16:39:24.506861] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.230 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.230 [2024-07-22 16:39:24.583223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.230 [2024-07-22 16:39:24.672201] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.230 [2024-07-22 16:39:24.672271] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.230 [2024-07-22 16:39:24.672284] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.230 [2024-07-22 16:39:24.672295] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.230 [2024-07-22 16:39:24.672305] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:05.230 [2024-07-22 16:39:24.672337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:05.231 [2024-07-22 16:39:24.818417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:05.231 [2024-07-22 16:39:24.834618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:05.231 NULL1 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.231 16:39:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:05.231 [2024-07-22 16:39:24.878180] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:05.231 [2024-07-22 16:39:24.878221] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744506 ] 00:26:05.489 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.055 Attached to nqn.2016-06.io.spdk:cnode1 00:26:06.055 Namespace ID: 1 size: 1GB 00:26:06.055 fused_ordering(0) 00:26:06.055 fused_ordering(1) 00:26:06.055 fused_ordering(2) 00:26:06.055 fused_ordering(3) 00:26:06.055 fused_ordering(4) 00:26:06.055 fused_ordering(5) 00:26:06.055 fused_ordering(6) 00:26:06.055 fused_ordering(7) 00:26:06.055 fused_ordering(8) 00:26:06.055 fused_ordering(9) 00:26:06.055 fused_ordering(10) 00:26:06.055 fused_ordering(11) 00:26:06.055 fused_ordering(12) 00:26:06.055 fused_ordering(13) 00:26:06.055 fused_ordering(14) 00:26:06.055 fused_ordering(15) 00:26:06.055 fused_ordering(16) 00:26:06.055 fused_ordering(17) 00:26:06.055 fused_ordering(18) 00:26:06.055 fused_ordering(19) 00:26:06.055 fused_ordering(20) 00:26:06.055 fused_ordering(21) 00:26:06.055 fused_ordering(22) 00:26:06.055 fused_ordering(23) 00:26:06.055 fused_ordering(24) 00:26:06.055 fused_ordering(25) 00:26:06.055 fused_ordering(26) 00:26:06.055 fused_ordering(27) 00:26:06.055 fused_ordering(28) 00:26:06.055 fused_ordering(29) 00:26:06.055 fused_ordering(30) 00:26:06.055 fused_ordering(31) 00:26:06.055 fused_ordering(32) 00:26:06.055 fused_ordering(33) 00:26:06.055 fused_ordering(34) 00:26:06.055 fused_ordering(35) 00:26:06.055 fused_ordering(36) 00:26:06.055 fused_ordering(37) 00:26:06.055 fused_ordering(38) 00:26:06.055 fused_ordering(39) 00:26:06.055 fused_ordering(40) 00:26:06.055 fused_ordering(41) 00:26:06.055 fused_ordering(42) 00:26:06.055 fused_ordering(43) 00:26:06.055 fused_ordering(44) 00:26:06.055 fused_ordering(45) 
00:26:06.055 fused_ordering(46)
[fused_ordering(47) through fused_ordering(905) continue one per line, in strictly ascending order, as the timestamps advance through 00:26:06.315, 00:26:06.882, 00:26:07.448 and 00:26:08.384]
00:26:08.384 fused_ordering(906) 00:26:08.384
fused_ordering(907) 00:26:08.384 fused_ordering(908) 00:26:08.384 fused_ordering(909) 00:26:08.384 fused_ordering(910) 00:26:08.384 fused_ordering(911) 00:26:08.384 fused_ordering(912) 00:26:08.384 fused_ordering(913) 00:26:08.384 fused_ordering(914) 00:26:08.384 fused_ordering(915) 00:26:08.384 fused_ordering(916) 00:26:08.384 fused_ordering(917) 00:26:08.384 fused_ordering(918) 00:26:08.384 fused_ordering(919) 00:26:08.384 fused_ordering(920) 00:26:08.384 fused_ordering(921) 00:26:08.384 fused_ordering(922) 00:26:08.384 fused_ordering(923) 00:26:08.384 fused_ordering(924) 00:26:08.384 fused_ordering(925) 00:26:08.384 fused_ordering(926) 00:26:08.384 fused_ordering(927) 00:26:08.384 fused_ordering(928) 00:26:08.384 fused_ordering(929) 00:26:08.384 fused_ordering(930) 00:26:08.384 fused_ordering(931) 00:26:08.384 fused_ordering(932) 00:26:08.384 fused_ordering(933) 00:26:08.384 fused_ordering(934) 00:26:08.384 fused_ordering(935) 00:26:08.384 fused_ordering(936) 00:26:08.384 fused_ordering(937) 00:26:08.384 fused_ordering(938) 00:26:08.384 fused_ordering(939) 00:26:08.384 fused_ordering(940) 00:26:08.384 fused_ordering(941) 00:26:08.384 fused_ordering(942) 00:26:08.384 fused_ordering(943) 00:26:08.384 fused_ordering(944) 00:26:08.384 fused_ordering(945) 00:26:08.384 fused_ordering(946) 00:26:08.384 fused_ordering(947) 00:26:08.384 fused_ordering(948) 00:26:08.384 fused_ordering(949) 00:26:08.384 fused_ordering(950) 00:26:08.384 fused_ordering(951) 00:26:08.384 fused_ordering(952) 00:26:08.384 fused_ordering(953) 00:26:08.384 fused_ordering(954) 00:26:08.384 fused_ordering(955) 00:26:08.384 fused_ordering(956) 00:26:08.384 fused_ordering(957) 00:26:08.384 fused_ordering(958) 00:26:08.384 fused_ordering(959) 00:26:08.384 fused_ordering(960) 00:26:08.384 fused_ordering(961) 00:26:08.384 fused_ordering(962) 00:26:08.384 fused_ordering(963) 00:26:08.384 fused_ordering(964) 00:26:08.384 fused_ordering(965) 00:26:08.384 fused_ordering(966) 00:26:08.384 fused_ordering(967) 00:26:08.384 fused_ordering(968) 00:26:08.384 fused_ordering(969) 00:26:08.384 fused_ordering(970) 00:26:08.384 fused_ordering(971) 00:26:08.384 fused_ordering(972) 00:26:08.384 fused_ordering(973) 00:26:08.384 fused_ordering(974) 00:26:08.384 fused_ordering(975) 00:26:08.384 fused_ordering(976) 00:26:08.384 fused_ordering(977) 00:26:08.384 fused_ordering(978) 00:26:08.384 fused_ordering(979) 00:26:08.384 fused_ordering(980) 00:26:08.384 fused_ordering(981) 00:26:08.384 fused_ordering(982) 00:26:08.384 fused_ordering(983) 00:26:08.384 fused_ordering(984) 00:26:08.384 fused_ordering(985) 00:26:08.384 fused_ordering(986) 00:26:08.384 fused_ordering(987) 00:26:08.384 fused_ordering(988) 00:26:08.384 fused_ordering(989) 00:26:08.384 fused_ordering(990) 00:26:08.384 fused_ordering(991) 00:26:08.384 fused_ordering(992) 00:26:08.384 fused_ordering(993) 00:26:08.384 fused_ordering(994) 00:26:08.384 fused_ordering(995) 00:26:08.384 fused_ordering(996) 00:26:08.384 fused_ordering(997) 00:26:08.384 fused_ordering(998) 00:26:08.384 fused_ordering(999) 00:26:08.384 fused_ordering(1000) 00:26:08.384 fused_ordering(1001) 00:26:08.384 fused_ordering(1002) 00:26:08.384 fused_ordering(1003) 00:26:08.384 fused_ordering(1004) 00:26:08.384 fused_ordering(1005) 00:26:08.384 fused_ordering(1006) 00:26:08.384 fused_ordering(1007) 00:26:08.384 fused_ordering(1008) 00:26:08.384 fused_ordering(1009) 00:26:08.384 fused_ordering(1010) 00:26:08.384 fused_ordering(1011) 00:26:08.384 fused_ordering(1012) 00:26:08.384 fused_ordering(1013) 00:26:08.384 
fused_ordering(1014) 00:26:08.384 fused_ordering(1015) 00:26:08.384 fused_ordering(1016) 00:26:08.384 fused_ordering(1017) 00:26:08.384 fused_ordering(1018) 00:26:08.384 fused_ordering(1019) 00:26:08.384 fused_ordering(1020) 00:26:08.384 fused_ordering(1021) 00:26:08.384 fused_ordering(1022) 00:26:08.384 fused_ordering(1023) 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:08.384 rmmod nvme_tcp 00:26:08.384 rmmod nvme_fabrics 00:26:08.384 rmmod nvme_keyring 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2744478 ']' 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2744478 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 2744478 ']' 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 2744478 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2744478 00:26:08.384 16:39:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:08.385 16:39:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:08.385 16:39:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2744478' 00:26:08.385 killing process with pid 2744478 00:26:08.385 16:39:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 2744478 00:26:08.385 16:39:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 2744478 00:26:08.643 16:39:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:08.643 16:39:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:08.643 16:39:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:08.643 16:39:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:08.643 16:39:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:08.643 16:39:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.643 16:39:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
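The shutdown above is the generic nvmftestfini path: clear the error trap, retry kernel-module removal until the modules are no longer busy, then kill the target process by pid. A minimal bash sketch of that sequence follows (helper name and retry count mirror the log; the namespace removal and address flush on the next lines complete it):

```bash
# Hedged sketch of the nvmftestfini teardown logged above. "$pid" is the
# nvmf_tgt process (2744478 in this run); the real autotest code adds checks.
nvmftestfini_sketch() {
    local pid=$1
    trap - SIGINT SIGTERM EXIT              # drop the error-handling trap first
    sync
    set +e                                  # module removal may fail while busy
    for _ in {1..20}; do
        if modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics; then
            break                           # emits the rmmod nvme_* lines above
        fi
        sleep 1
    done
    set -e
    if kill -0 "$pid" 2>/dev/null; then     # is the process still alive?
        kill "$pid" && wait "$pid" 2>/dev/null
    fi
}
```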
00:26:08.643 16:39:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.543 16:39:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:10.543 00:26:10.543 real 0m8.382s 00:26:10.543 user 0m5.479s 00:26:10.543 sys 0m4.184s 00:26:10.543 16:39:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:10.543 16:39:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:10.543 ************************************ 00:26:10.543 END TEST nvmf_fused_ordering 00:26:10.543 ************************************ 00:26:10.801 16:39:30 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:26:10.801 16:39:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:10.801 16:39:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:10.801 16:39:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:10.801 ************************************ 00:26:10.801 START TEST nvmf_delete_subsystem 00:26:10.801 ************************************ 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:26:10.801 * Looking for test storage... 00:26:10.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
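Each sub-test is driven by the run_test wrapper visible in the invocation above: it prints the START/END banners, runs the script under timing (the real/user/sys block above is that output), and propagates the exit code. A sketch of the pattern, with internals assumed:

```bash
# Sketch of the run_test wrapper pattern (banner shape taken from the log;
# the real helper also records per-test timing for the final report).
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                               # e.g. delete_subsystem.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```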
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.801 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
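One thing the PATH exports above make visible: paths/export.sh prepends the same toolchain directories every time it is sourced, so PATH accumulates several copies of the Go/golangci/protoc entries over the run. Harmless, but a guard like this hypothetical helper (an assumption, not in the SPDK tree) would keep the sourcing idempotent:

```bash
# Hypothetical helper (assumption, not SPDK code): prepend a directory to
# PATH only when absent, so repeated sourcing leaves PATH unchanged.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                        # already present, nothing to do
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
export PATH
```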
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:26:10.802 16:39:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:26:13.331 Found 0000:82:00.0 (0x8086 - 0x159b) 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.331 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:26:13.332 Found 0000:82:00.1 (0x8086 - 0x159b) 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:13.332 16:39:32 
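The device table above is how gather_supported_nvmf_pci_devs classifies NICs: it matches PCI vendor:device pairs against known E810, X722 and Mellanox parts, and here finds the two E810 functions at 0000:82:00.0 and 0000:82:00.1 (ice driver, 0x8086:0x159b). A simplified sketch of that match against sysfs (the real helper walks a cached PCI bus map rather than reading sysfs directly):

```bash
# Simplified sketch: report Intel E810 functions (0x8086:0x159b) from sysfs.
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")                # e.g. 0x8086
    device=$(<"$dev/device")                # e.g. 0x159b
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        echo "Found ${dev##*/} ($vendor - $device)"
    fi
done
```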
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:26:13.332 Found net devices under 0000:82:00.0: cvl_0_0 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:26:13.332 Found net devices under 0000:82:00.1: cvl_0_1 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:13.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:26:13.332 00:26:13.332 --- 10.0.0.2 ping statistics --- 00:26:13.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.332 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
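Everything nvmf_tcp_init just did can be replayed in a dozen commands: the first E810 port (cvl_0_0) is moved into a fresh namespace as the target side (10.0.0.2), the second (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened, and both directions are ping-verified. Condensed from the log, with error handling omitted:

```bash
# Replay of the namespace plumbing above (commands as logged).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk  # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # initiator -> target (0.149 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
```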
00:26:13.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:26:13.332 00:26:13.332 --- 10.0.0.1 ping statistics --- 00:26:13.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.332 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2747122 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2747122 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 2747122 ']' 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:13.332 16:39:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:13.332 [2024-07-22 16:39:32.850215] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:13.332 [2024-07-22 16:39:32.850287] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.332 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.332 [2024-07-22 16:39:32.929506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:13.591 [2024-07-22 16:39:33.020684] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
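nvmfappstart then launches the target inside that namespace on a two-core mask (0x3) and blocks in waitforlisten until the RPC socket answers; the EAL/DPDK and app_setup_trace notices around here are its startup chatter. A sketch of the launch-and-wait step (polling via rpc.py is an assumption; the real waitforlisten watches the unix socket itself, and SPDK_BIN is an assumed variable for the build/bin directory):

```bash
# Sketch: start nvmf_tgt in the target namespace, then wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2                               # waitforlisten equivalent
done
echo "nvmf_tgt ready, pid $nvmfpid"         # 2747122 in this run
```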
00:26:13.591 [2024-07-22 16:39:33.020740] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.591 [2024-07-22 16:39:33.020765] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.591 [2024-07-22 16:39:33.020779] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.591 [2024-07-22 16:39:33.020790] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.591 [2024-07-22 16:39:33.020892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.591 [2024-07-22 16:39:33.020898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:13.591 [2024-07-22 16:39:33.168894] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:13.591 [2024-07-22 16:39:33.185127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:13.591 NULL1 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:13.591 Delay0 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2747148 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:26:13.591 16:39:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:13.849 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.849 [2024-07-22 16:39:33.259779] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
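Lines @15-@28 of delete_subsystem.sh set the whole experiment up through rpc_cmd: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev that injects roughly one second of latency (values are in microseconds), and a 5-second random-read/write perf job at queue depth 128. The same sequence as explicit rpc.py calls (rpc_cmd is a thin wrapper around scripts/rpc.py):

```bash
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512       # 1000 MB bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s injected latency
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# I/O load that will still be in flight when the subsystem is deleted:
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
```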
00:26:15.746 16:39:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 16:39:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 16:39:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:26:15.746-00:26:16.939 [condensed: several hundred "Read/Write completed with error (sct=0, sc=8)" completions interleaved with "starting I/O failed: -6" submission failures, as every outstanding command is aborted during subsystem teardown]
00:26:15.747 [2024-07-22 16:39:35.350006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff9180 is same with the state(5) to be set
00:26:15.747 [2024-07-22 16:39:35.350779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc8c4000c00 is same with the state(5) to be set
00:26:16.681 [2024-07-22 16:39:36.321048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffc8b0 is same with the state(5) to be set
00:26:16.938 [2024-07-22 16:39:36.350320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff9360 is same with the state(5) to be set
00:26:16.939 [2024-07-22 16:39:36.352472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc8c400c600 is same with the state(5) to be set
00:26:16.939 [2024-07-22 16:39:36.352638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff9aa0 is same with the state(5) to be set
00:26:16.939 [2024-07-22 16:39:36.352862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc8c400bfe0 is same with the state(5) to be set
00:26:16.939 Initializing NVMe Controllers
00:26:16.939 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:16.939 Controller IO queue size 128, less than required.
00:26:16.939 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:16.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:26:16.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:26:16.939 Initialization complete. Launching workers.
00:26:16.939 ========================================================
00:26:16.939                                                                            Latency(us)
00:26:16.939 Device Information                                                       :    IOPS   MiB/s    Average        min         max
00:26:16.939 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  170.77    0.08  900576.31     667.53  2000984.75
00:26:16.939 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  165.81    0.08  960760.09     384.16  2003651.04
00:26:16.939 ========================================================
00:26:16.939 Total                                                                    :  336.58    0.16  930224.37     384.16  2003651.04
00:26:16.939
00:26:16.939 [2024-07-22 16:39:36.353827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffc8b0 (9): Bad file descriptor
00:26:16.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:16.939 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:16.939 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:26:16.939 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2747148
00:26:16.939 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:26:17.504 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:26:17.504 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2747148
00:26:17.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2747148) - No such process
00:26:17.504 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2747148
00:26:17.504 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:26:17.504 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2747148
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2747148
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.505 [2024-07-22 16:39:36.878655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2747670 00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2747670 00:26:17.505 16:39:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:17.505 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.505 [2024-07-22 16:39:36.931794] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
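The trace above (delete_subsystem.sh@52-@58) shows the harness's liveness-polling idiom around spdk_nvme_perf, and the repeated kill -0 / sleep 0.5 iterations that follow are that loop waiting out the three-second perf run. A minimal standalone sketch of the same pattern, assuming the repository-relative binary path and the target address used in this run, with the script's xtrace and NOT/wait bookkeeping omitted:

# Sketch only: start the perf workload in the background, then poll its PID.
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

delay=0
# kill -0 delivers no signal; it only tests whether the PID still exists.
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "perf still running after ~10s" >&2; break; }
    sleep 0.5
done
wait "$perf_pid"    # reap the job and collect its exit status

In the first phase above (pid 2747148) the loop ended early because deleting the subsystem made perf exit with errors, so the next kill -0 reported "No such process" -- the path this test expects.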
00:26:17.761 16:39:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:17.761 16:39:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2747670 00:26:17.761 16:39:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:18.326 16:39:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:18.326 16:39:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2747670 00:26:18.326 16:39:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:18.890 16:39:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:18.890 16:39:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2747670 00:26:18.890 16:39:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:19.454 16:39:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:19.454 16:39:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2747670 00:26:19.454 16:39:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:20.019 16:39:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:20.019 16:39:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2747670 00:26:20.019 16:39:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:20.277 16:39:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:20.277 16:39:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2747670 00:26:20.277 16:39:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:20.534 Initializing NVMe Controllers 00:26:20.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:20.534 Controller IO queue size 128, less than required. 00:26:20.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:20.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:20.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:20.535 Initialization complete. Launching workers. 
00:26:20.535 ========================================================
00:26:20.535                                                                            Latency(us)
00:26:20.535 Device Information                                                       :    IOPS   MiB/s    Average         min         max
00:26:20.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06  1004587.72  1000262.83  1012652.48
00:26:20.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06  1004454.39  1000194.85  1012248.64
00:26:20.535 ========================================================
00:26:20.535 Total                                                                    :  256.00    0.12  1004521.06  1000194.85  1012652.48
00:26:20.535
00:26:20.793 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:26:20.793 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2747670
00:26:20.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2747670) - No such process
00:26:20.793 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2747670
00:26:20.793 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:26:20.793 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:26:20.793 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:20.793 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:26:20.793 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:20.793 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:26:20.793 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:20.793 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:20.793 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2747122 ']'
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2747122
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 2747122 ']'
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 2747122
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2747122
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2747122'
killing process with pid 2747122
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 2747122
00:26:21.051 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait
2747122 00:26:21.309 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:21.309 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:21.310 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:21.310 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:21.310 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:21.310 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.310 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.310 16:39:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.222 16:39:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:23.222 00:26:23.222 real 0m12.535s 00:26:23.222 user 0m27.510s 00:26:23.222 sys 0m3.222s 00:26:23.222 16:39:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:23.222 16:39:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:23.222 ************************************ 00:26:23.222 END TEST nvmf_delete_subsystem 00:26:23.222 ************************************ 00:26:23.222 16:39:42 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:26:23.222 16:39:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:23.222 16:39:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:23.222 16:39:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:23.222 ************************************ 00:26:23.222 START TEST nvmf_ns_masking 00:26:23.222 ************************************ 00:26:23.222 16:39:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:26:23.222 * Looking for test storage... 
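The asterisk banners, the END TEST/START TEST markers, and the real/user/sys times around each suite come from the run_test wrapper in autotest_common.sh. A rough sketch of that pattern -- simplified, not the exact SPDK implementation, with the wrapper's argument checks omitted:

# Sketch of the run_test harness pattern: banner, time the test, propagate rc.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                     # emits the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp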
00:26:23.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:23.222 16:39:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:23.222 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=9dc999fd-da52-4dc8-b952-07ca1436bb1d 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:23.481 16:39:42 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:26:23.481 16:39:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:26:26.012 Found 0000:82:00.0 (0x8086 - 0x159b) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:26:26.012 Found 0000:82:00.1 (0x8086 - 0x159b) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:26:26.012 Found net devices under 0000:82:00.0: cvl_0_0 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:26:26.012 Found net devices under 0000:82:00.1: cvl_0_1 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.012 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:26.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:26:26.013 00:26:26.013 --- 10.0.0.2 ping statistics --- 00:26:26.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.013 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:26.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:26:26.013 00:26:26.013 --- 10.0.0.1 ping statistics --- 00:26:26.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.013 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2750297 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2750297 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 2750297 ']' 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:26.013 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:26.013 [2024-07-22 16:39:45.555411] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
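The nvmf_tcp_init sequence traced above moves the target-side port (cvl_0_0) into its own network namespace while the initiator side (cvl_0_1) stays in the root namespace, so initiator and target traffic actually crosses the link; the target application is then launched inside that namespace via ip netns exec, as the nvmf_tgt line above shows. Condensed, with the interface names and addresses from this run:

ip netns add cvl_0_0_ns_spdk                  # namespace that hosts the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator check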
00:26:26.013 [2024-07-22 16:39:45.555480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.013 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.013 [2024-07-22 16:39:45.628509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.271 [2024-07-22 16:39:45.717227] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.271 [2024-07-22 16:39:45.717305] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.271 [2024-07-22 16:39:45.717329] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.271 [2024-07-22 16:39:45.717340] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.271 [2024-07-22 16:39:45.717350] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.271 [2024-07-22 16:39:45.717428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.271 [2024-07-22 16:39:45.717494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.271 [2024-07-22 16:39:45.717536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.271 [2024-07-22 16:39:45.717538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.271 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:26.271 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:26:26.271 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:26.271 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:26.271 16:39:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:26.271 16:39:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.271 16:39:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:26.528 [2024-07-22 16:39:46.104615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.528 16:39:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:26:26.528 16:39:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:26:26.529 16:39:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:26.786 Malloc1 00:26:27.044 16:39:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:26:27.302 Malloc2 00:26:27.302 16:39:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:27.559 16:39:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:26:27.817 16:39:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:28.074 [2024-07-22 16:39:47.473360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.074 16:39:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:26:28.074 16:39:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9dc999fd-da52-4dc8-b952-07ca1436bb1d -a 10.0.0.2 -s 4420 -i 4 00:26:28.074 16:39:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:26:28.074 16:39:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:26:28.074 16:39:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:28.074 16:39:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:28.074 16:39:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:30.596 [ 0]:0x1 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a24b0480af724785b2bdf670d587cfdb 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a24b0480af724785b2bdf670d587cfdb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:30.596 16:39:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:26:30.596 [ 0]:0x1 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a24b0480af724785b2bdf670d587cfdb 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a24b0480af724785b2bdf670d587cfdb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:30.596 [ 1]:0x2 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826e3d6514854ad08fe831a75c215e96 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826e3d6514854ad08fe831a75c215e96 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:26:30.596 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:30.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:30.852 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:30.853 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:26:31.109 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:26:31.109 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9dc999fd-da52-4dc8-b952-07ca1436bb1d -a 10.0.0.2 -s 4420 -i 4 00:26:31.366 16:39:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:26:31.366 16:39:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:26:31.366 16:39:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:31.366 16:39:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:26:31.366 16:39:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:26:31.366 16:39:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:26:33.259 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:33.259 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:33.259 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:26:33.259 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:33.259 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:26:33.259 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:26:33.259 16:39:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:26:33.259 16:39:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:33.517 16:39:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:33.517 [ 0]:0x2 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826e3d6514854ad08fe831a75c215e96 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826e3d6514854ad08fe831a75c215e96 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:33.517 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:33.775 [ 0]:0x1 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a24b0480af724785b2bdf670d587cfdb 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a24b0480af724785b2bdf670d587cfdb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:33.775 [ 1]:0x2 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:33.775 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826e3d6514854ad08fe831a75c215e96 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826e3d6514854ad08fe831a75c215e96 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:34.033 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:26:34.290 
16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:34.290 [ 0]:0x2 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826e3d6514854ad08fe831a75c215e96 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826e3d6514854ad08fe831a75c215e96 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:34.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:34.290 16:39:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:34.548 16:39:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:26:34.548 16:39:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9dc999fd-da52-4dc8-b952-07ca1436bb1d -a 10.0.0.2 -s 4420 -i 4 00:26:34.805 16:39:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:26:34.805 16:39:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:26:34.805 16:39:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:34.805 16:39:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:26:34.805 16:39:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:26:34.805 16:39:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:36.703 [ 0]:0x1 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:36.703 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:36.961 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a24b0480af724785b2bdf670d587cfdb 00:26:36.961 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a24b0480af724785b2bdf670d587cfdb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:36.961 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:26:36.961 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:36.961 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:36.961 [ 1]:0x2 00:26:36.961 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:36.961 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:36.961 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826e3d6514854ad08fe831a75c215e96 00:26:36.961 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826e3d6514854ad08fe831a75c215e96 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:36.961 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:37.219 [ 0]:0x2 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826e3d6514854ad08fe831a75c215e96 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826e3d6514854ad08fe831a75c215e96 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:26:37.219 16:39:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:37.477 [2024-07-22 16:39:57.109195] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:26:37.477 request: 00:26:37.477 { 00:26:37.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:37.477 "nsid": 2, 00:26:37.477 "host": "nqn.2016-06.io.spdk:host1", 00:26:37.477 "method": 
"nvmf_ns_remove_host", 00:26:37.477 "req_id": 1 00:26:37.477 } 00:26:37.477 Got JSON-RPC error response 00:26:37.477 response: 00:26:37.477 { 00:26:37.477 "code": -32602, 00:26:37.477 "message": "Invalid parameters" 00:26:37.477 } 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:37.734 [ 0]:0x2 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826e3d6514854ad08fe831a75c215e96 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826e3d6514854ad08fe831a75c215e96 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:37.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:37.734 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:37.992 rmmod nvme_tcp 00:26:37.992 rmmod nvme_fabrics 00:26:37.992 rmmod nvme_keyring 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2750297 ']' 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2750297 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 2750297 ']' 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 2750297 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:37.992 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2750297 00:26:37.993 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:37.993 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:37.993 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2750297' 00:26:37.993 killing process with pid 2750297 00:26:37.993 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 2750297 00:26:37.993 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 2750297 00:26:38.560 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:38.560 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:38.560 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:38.560 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.560 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:38.560 16:39:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.560 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.560 16:39:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.459 
16:39:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:40.459 00:26:40.459 real 0m17.129s 00:26:40.459 user 0m52.220s 00:26:40.459 sys 0m4.108s 00:26:40.459 16:39:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:40.459 16:39:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:40.459 ************************************ 00:26:40.459 END TEST nvmf_ns_masking 00:26:40.459 ************************************ 00:26:40.459 16:39:59 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:26:40.459 16:39:59 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:26:40.459 16:39:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:40.459 16:39:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:40.459 16:39:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:40.459 ************************************ 00:26:40.459 START TEST nvmf_nvme_cli 00:26:40.459 ************************************ 00:26:40.459 16:39:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:26:40.459 * Looking for test storage... 00:26:40.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.459 16:40:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.460 16:40:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:26:42.990 Found 0000:82:00.0 (0x8086 - 0x159b) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:26:42.990 Found 0000:82:00.1 (0x8086 - 0x159b) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.990 16:40:02 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:26:42.990 Found net devices under 0000:82:00.0: cvl_0_0 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:26:42.990 Found net devices under 0000:82:00.1: cvl_0_1 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:42.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:42.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:26:42.990 00:26:42.990 --- 10.0.0.2 ping statistics --- 00:26:42.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.990 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:42.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:42.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:26:42.990 00:26:42.990 --- 10.0.0.1 ping statistics --- 00:26:42.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.990 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2754136 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2754136 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 2754136 ']' 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.990 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:42.991 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
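The target bring-up that the trace walks through next reduces to a short JSON-RPC sequence. As a minimal sketch distilled from this run (rpc.py stands for the workspace copy at spdk/scripts/rpc.py shown in the trace, and the address, serial, bdev sizes, and flags are exactly the values recorded below):

# TCP transport with the options recorded in the trace, plus two 64 MiB malloc bdevs
# with 512-byte blocks (MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 from nvme_cli.sh)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py bdev_malloc_create 64 512 -b Malloc1
# one subsystem carrying both bdevs as namespaces, listening on 10.0.0.2:4420,
# plus a discovery listener on the same address (flags as issued in the run)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420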
00:26:42.991 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:42.991 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:42.991 [2024-07-22 16:40:02.553978] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:42.991 [2024-07-22 16:40:02.554059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.991 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.991 [2024-07-22 16:40:02.639268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:43.250 [2024-07-22 16:40:02.730555] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.250 [2024-07-22 16:40:02.730615] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.250 [2024-07-22 16:40:02.730637] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.250 [2024-07-22 16:40:02.730651] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.250 [2024-07-22 16:40:02.730663] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:43.250 [2024-07-22 16:40:02.730755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.250 [2024-07-22 16:40:02.730807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.250 [2024-07-22 16:40:02.730925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.250 [2024-07-22 16:40:02.730927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:43.250 [2024-07-22 16:40:02.882819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.250 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:43.508 Malloc0 00:26:43.508 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:43.509 Malloc1 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:43.509 [2024-07-22 16:40:02.966441] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.509 16:40:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 4420 00:26:43.509 00:26:43.509 Discovery Log Number of Records 2, Generation counter 2 00:26:43.509 =====Discovery Log Entry 0====== 00:26:43.509 trtype: tcp 00:26:43.509 adrfam: ipv4 00:26:43.509 subtype: current discovery subsystem 00:26:43.509 treq: not required 00:26:43.509 portid: 0 00:26:43.509 trsvcid: 4420 00:26:43.509 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:43.509 traddr: 10.0.0.2 00:26:43.509 eflags: explicit discovery connections, duplicate discovery information 00:26:43.509 sectype: none 00:26:43.509 =====Discovery Log Entry 1====== 00:26:43.509 trtype: tcp 00:26:43.509 adrfam: ipv4 00:26:43.509 subtype: nvme subsystem 00:26:43.509 treq: not required 00:26:43.509 portid: 0 00:26:43.509 trsvcid: 
4420 00:26:43.509 subnqn: nqn.2016-06.io.spdk:cnode1 00:26:43.509 traddr: 10.0.0.2 00:26:43.509 eflags: none 00:26:43.509 sectype: none 00:26:43.509 16:40:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:26:43.509 16:40:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:26:43.509 16:40:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:26:43.509 16:40:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:43.509 16:40:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:26:43.509 16:40:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:26:43.509 16:40:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:43.509 16:40:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:26:43.509 16:40:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:43.509 16:40:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:26:43.509 16:40:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:44.073 16:40:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:26:44.073 16:40:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:26:44.073 16:40:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:44.073 16:40:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:26:44.073 16:40:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:26:44.073 16:40:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:26:46.599 16:40:05 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:26:46.599 /dev/nvme0n1 ]] 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:26:46.599 16:40:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:46.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:46.599 16:40:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:46.599 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:26:46.599 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:46.599 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:46.599 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:46.599 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:46.599 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:26:46.599 16:40:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:26:46.599 16:40:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:46.599 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.599 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:46.857 16:40:06 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:46.857 rmmod nvme_tcp 00:26:46.857 rmmod nvme_fabrics 00:26:46.857 rmmod nvme_keyring 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2754136 ']' 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2754136 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 2754136 ']' 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 2754136 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:46.857 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2754136 00:26:46.858 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:46.858 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:46.858 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2754136' 00:26:46.858 killing process with pid 2754136 00:26:46.858 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 2754136 00:26:46.858 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 2754136 00:26:47.115 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:47.115 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:47.115 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:47.115 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.115 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:47.115 16:40:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.115 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.115 16:40:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.015 16:40:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.015 00:26:49.015 real 0m8.650s 00:26:49.015 user 0m15.964s 00:26:49.015 sys 0m2.403s 00:26:49.015 16:40:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:49.015 16:40:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:49.015 ************************************ 00:26:49.015 END TEST nvmf_nvme_cli 00:26:49.015 ************************************ 00:26:49.273 16:40:08 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:26:49.273 16:40:08 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:26:49.273 16:40:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:49.273 16:40:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:49.273 16:40:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.273 ************************************ 00:26:49.273 START TEST nvmf_vfio_user 00:26:49.273 ************************************ 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:26:49.273 * Looking for test storage... 00:26:49.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.273 16:40:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:26:49.274 
16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2755054 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2755054' 00:26:49.274 Process pid: 2755054 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2755054 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2755054 ']' 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:49.274 16:40:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:26:49.274 [2024-07-22 16:40:08.816224] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:49.274 [2024-07-22 16:40:08.816315] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.274 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.274 [2024-07-22 16:40:08.882201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:49.532 [2024-07-22 16:40:08.971056] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.532 [2024-07-22 16:40:08.971107] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.532 [2024-07-22 16:40:08.971135] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.532 [2024-07-22 16:40:08.971149] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.532 [2024-07-22 16:40:08.971159] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
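At this point the harness has started nvmf_tgt and is waiting on /var/tmp/spdk.sock; the RPC calls echoed over the next stretch of the trace provision the two vfio-user endpoints. A condensed sketch of that sequence, assuming the workspace layout above (the absolute rpc.py path from the trace is shortened to $rpc here):

# Start the target on cores 0-3 with all tracepoints enabled (mirrors the
# nvmf_tgt invocation above), then provision one vfio-user controller; the
# trace repeats the same four RPCs for Malloc2 / cnode2 / vfio-user2.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
rpc=./scripts/rpc.py   # shortened; the trace uses the absolute workspace path
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB malloc bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0   # traddr is the socket directory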
00:26:49.532 [2024-07-22 16:40:08.971216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.532 [2024-07-22 16:40:08.971275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:49.532 [2024-07-22 16:40:08.971317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:49.532 [2024-07-22 16:40:08.971319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.532 16:40:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:49.532 16:40:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:26:49.532 16:40:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:26:50.464 16:40:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:26:51.028 16:40:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:26:51.028 16:40:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:26:51.028 16:40:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:26:51.028 16:40:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:26:51.028 16:40:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:51.028 Malloc1 00:26:51.028 16:40:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:26:51.285 16:40:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:26:51.542 16:40:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:26:51.799 16:40:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:26:51.799 16:40:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:26:51.799 16:40:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:26:52.057 Malloc2 00:26:52.057 16:40:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:26:52.315 16:40:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:26:52.572 16:40:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:26:52.830 16:40:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:26:52.830 16:40:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:26:52.830 16:40:12 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:26:52.830 16:40:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:26:52.830 16:40:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:26:52.830 16:40:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:26:52.830 [2024-07-22 16:40:12.428427] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:52.830 [2024-07-22 16:40:12.428472] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755479 ] 00:26:52.830 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.830 [2024-07-22 16:40:12.461371] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:26:52.830 [2024-07-22 16:40:12.472509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:26:52.831 [2024-07-22 16:40:12.472540] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f670ad1f000 00:26:52.831 [2024-07-22 16:40:12.473504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:52.831 [2024-07-22 16:40:12.474500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:52.831 [2024-07-22 16:40:12.475506] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:52.831 [2024-07-22 16:40:12.476511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:26:52.831 [2024-07-22 16:40:12.477527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:26:52.831 [2024-07-22 16:40:12.478529] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:52.831 [2024-07-22 16:40:12.479544] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:26:52.831 [2024-07-22 16:40:12.480541] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:53.090 [2024-07-22 16:40:12.481562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:26:53.090 [2024-07-22 16:40:12.481584] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6709ad5000 00:26:53.090 [2024-07-22 16:40:12.482787] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:26:53.090 [2024-07-22 16:40:12.498400] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:26:53.090 [2024-07-22 16:40:12.498440] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:26:53.090 [2024-07-22 16:40:12.500665] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:26:53.090 [2024-07-22 16:40:12.500720] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:26:53.090 [2024-07-22 16:40:12.500817] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:26:53.090 [2024-07-22 16:40:12.500850] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:26:53.090 [2024-07-22 16:40:12.500861] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:26:53.090 [2024-07-22 16:40:12.501650] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:26:53.090 [2024-07-22 16:40:12.501675] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:26:53.090 [2024-07-22 16:40:12.501688] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:26:53.090 [2024-07-22 16:40:12.502655] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:26:53.090 [2024-07-22 16:40:12.502675] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:26:53.090 [2024-07-22 16:40:12.502690] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:26:53.090 [2024-07-22 16:40:12.503663] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:26:53.090 [2024-07-22 16:40:12.503681] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:53.090 [2024-07-22 16:40:12.504666] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:26:53.090 [2024-07-22 16:40:12.504684] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:26:53.090 [2024-07-22 16:40:12.504693] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:26:53.090 [2024-07-22 16:40:12.504705] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:53.090 [2024-07-22 16:40:12.504815] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:26:53.090 [2024-07-22 16:40:12.504823] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:53.090 [2024-07-22 16:40:12.504832] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:26:53.090 [2024-07-22 16:40:12.508974] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:26:53.090 [2024-07-22 16:40:12.509702] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:26:53.090 [2024-07-22 16:40:12.510706] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:26:53.090 [2024-07-22 16:40:12.511707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:26:53.090 [2024-07-22 16:40:12.511853] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:53.090 [2024-07-22 16:40:12.512720] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:26:53.090 [2024-07-22 16:40:12.512737] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:53.090 [2024-07-22 16:40:12.512746] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:26:53.090 [2024-07-22 16:40:12.512770] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:26:53.090 [2024-07-22 16:40:12.512783] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:26:53.090 [2024-07-22 16:40:12.512816] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:26:53.090 [2024-07-22 16:40:12.512826] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:53.090 [2024-07-22 16:40:12.512850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:53.090 [2024-07-22 16:40:12.512928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:26:53.090 [2024-07-22 16:40:12.512972] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:26:53.090 [2024-07-22 16:40:12.512984] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:26:53.090 [2024-07-22 16:40:12.512993] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:26:53.090 [2024-07-22 16:40:12.513001] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:26:53.090 [2024-07-22 16:40:12.513015] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:26:53.090 [2024-07-22 16:40:12.513024] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:26:53.090 [2024-07-22 16:40:12.513032] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:26:53.090 [2024-07-22 16:40:12.513047] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:26:53.090 [2024-07-22 16:40:12.513064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:26:53.090 [2024-07-22 16:40:12.513082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:26:53.090 [2024-07-22 16:40:12.513102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.090 [2024-07-22 16:40:12.513115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.090 [2024-07-22 16:40:12.513128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.090 [2024-07-22 16:40:12.513140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:53.090 [2024-07-22 16:40:12.513149] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:53.090 [2024-07-22 16:40:12.513165] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.513192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.513204] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:26:53.091 [2024-07-22 16:40:12.513213] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513225] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513240] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.513269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.513365] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513382] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513397] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:26:53.091 [2024-07-22 16:40:12.513405] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:26:53.091 [2024-07-22 16:40:12.513415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.513430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.513448] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:26:53.091 [2024-07-22 16:40:12.513469] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513483] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513495] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:26:53.091 [2024-07-22 16:40:12.513503] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:53.091 [2024-07-22 16:40:12.513513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.513539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.513563] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513577] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513589] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:26:53.091 [2024-07-22 16:40:12.513597] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:53.091 [2024-07-22 16:40:12.513606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.513617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.513632] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513642] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:26:53.091 [2024-07-22 16:40:12.513656] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513667] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513676] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513685] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:26:53.091 [2024-07-22 16:40:12.513692] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:26:53.091 [2024-07-22 16:40:12.513701] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:26:53.091 [2024-07-22 16:40:12.513731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.513750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.513769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.513784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.513801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.513812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.513827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.513839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.513857] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:26:53.091 [2024-07-22 16:40:12.513866] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:26:53.091 [2024-07-22 16:40:12.513872] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:26:53.091 [2024-07-22 16:40:12.513878] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:26:53.091 [2024-07-22 16:40:12.513887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:26:53.091 [2024-07-22 16:40:12.513898] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:26:53.091 [2024-07-22 16:40:12.513906] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:26:53.091 [2024-07-22 16:40:12.513915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.513926] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:26:53.091 [2024-07-22 16:40:12.513933] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:53.091 [2024-07-22 16:40:12.513942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.513980] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:26:53.091 [2024-07-22 16:40:12.513990] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:26:53.091 [2024-07-22 16:40:12.513999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:26:53.091 [2024-07-22 16:40:12.514010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.514031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.514047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:26:53.091 [2024-07-22 16:40:12.514061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:26:53.091 ===================================================== 00:26:53.091 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:26:53.091 ===================================================== 00:26:53.091 Controller Capabilities/Features 00:26:53.091 ================================ 00:26:53.091 Vendor ID: 4e58 00:26:53.091 Subsystem Vendor ID: 4e58 00:26:53.091 Serial Number: SPDK1 00:26:53.091 Model Number: SPDK bdev Controller 00:26:53.091 Firmware Version: 24.05.1 00:26:53.091 Recommended Arb Burst: 6 00:26:53.091 IEEE OUI Identifier: 8d 6b 50 00:26:53.091 Multi-path I/O 00:26:53.091 May have multiple subsystem ports: Yes 00:26:53.091 May have multiple controllers: Yes 00:26:53.091 Associated with SR-IOV VF: No 00:26:53.091 Max Data Transfer Size: 131072 00:26:53.091 Max Number of Namespaces: 32 00:26:53.091 Max Number of I/O Queues: 127 00:26:53.091 NVMe Specification Version (VS): 1.3 00:26:53.091 NVMe Specification Version (Identify): 1.3 00:26:53.091 Maximum Queue Entries: 256 00:26:53.091 Contiguous Queues Required: Yes 00:26:53.091 Arbitration Mechanisms Supported 00:26:53.091 Weighted Round Robin: Not Supported 00:26:53.091 Vendor Specific: Not Supported 00:26:53.091 Reset Timeout: 15000 ms 00:26:53.091 Doorbell Stride: 4 bytes 00:26:53.091 NVM Subsystem Reset: Not Supported 00:26:53.091 Command Sets Supported 00:26:53.091 NVM Command Set: Supported 00:26:53.091 Boot Partition: Not Supported 00:26:53.091 Memory Page Size Minimum: 4096 bytes 00:26:53.091 Memory Page Size Maximum: 4096 bytes 00:26:53.091 Persistent Memory Region: Not Supported 00:26:53.091 Optional Asynchronous Events Supported 00:26:53.091 Namespace Attribute Notices: Supported 00:26:53.091 Firmware Activation Notices: Not Supported 00:26:53.091 ANA Change Notices: Not Supported 
00:26:53.091 PLE Aggregate Log Change Notices: Not Supported 00:26:53.091 LBA Status Info Alert Notices: Not Supported 00:26:53.091 EGE Aggregate Log Change Notices: Not Supported 00:26:53.091 Normal NVM Subsystem Shutdown event: Not Supported 00:26:53.091 Zone Descriptor Change Notices: Not Supported 00:26:53.092 Discovery Log Change Notices: Not Supported 00:26:53.092 Controller Attributes 00:26:53.092 128-bit Host Identifier: Supported 00:26:53.092 Non-Operational Permissive Mode: Not Supported 00:26:53.092 NVM Sets: Not Supported 00:26:53.092 Read Recovery Levels: Not Supported 00:26:53.092 Endurance Groups: Not Supported 00:26:53.092 Predictable Latency Mode: Not Supported 00:26:53.092 Traffic Based Keep ALive: Not Supported 00:26:53.092 Namespace Granularity: Not Supported 00:26:53.092 SQ Associations: Not Supported 00:26:53.092 UUID List: Not Supported 00:26:53.092 Multi-Domain Subsystem: Not Supported 00:26:53.092 Fixed Capacity Management: Not Supported 00:26:53.092 Variable Capacity Management: Not Supported 00:26:53.092 Delete Endurance Group: Not Supported 00:26:53.092 Delete NVM Set: Not Supported 00:26:53.092 Extended LBA Formats Supported: Not Supported 00:26:53.092 Flexible Data Placement Supported: Not Supported 00:26:53.092 00:26:53.092 Controller Memory Buffer Support 00:26:53.092 ================================ 00:26:53.092 Supported: No 00:26:53.092 00:26:53.092 Persistent Memory Region Support 00:26:53.092 ================================ 00:26:53.092 Supported: No 00:26:53.092 00:26:53.092 Admin Command Set Attributes 00:26:53.092 ============================ 00:26:53.092 Security Send/Receive: Not Supported 00:26:53.092 Format NVM: Not Supported 00:26:53.092 Firmware Activate/Download: Not Supported 00:26:53.092 Namespace Management: Not Supported 00:26:53.092 Device Self-Test: Not Supported 00:26:53.092 Directives: Not Supported 00:26:53.092 NVMe-MI: Not Supported 00:26:53.092 Virtualization Management: Not Supported 00:26:53.092 Doorbell Buffer Config: Not Supported 00:26:53.092 Get LBA Status Capability: Not Supported 00:26:53.092 Command & Feature Lockdown Capability: Not Supported 00:26:53.092 Abort Command Limit: 4 00:26:53.092 Async Event Request Limit: 4 00:26:53.092 Number of Firmware Slots: N/A 00:26:53.092 Firmware Slot 1 Read-Only: N/A 00:26:53.092 Firmware Activation Without Reset: N/A 00:26:53.092 Multiple Update Detection Support: N/A 00:26:53.092 Firmware Update Granularity: No Information Provided 00:26:53.092 Per-Namespace SMART Log: No 00:26:53.092 Asymmetric Namespace Access Log Page: Not Supported 00:26:53.092 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:26:53.092 Command Effects Log Page: Supported 00:26:53.092 Get Log Page Extended Data: Supported 00:26:53.092 Telemetry Log Pages: Not Supported 00:26:53.092 Persistent Event Log Pages: Not Supported 00:26:53.092 Supported Log Pages Log Page: May Support 00:26:53.092 Commands Supported & Effects Log Page: Not Supported 00:26:53.092 Feature Identifiers & Effects Log Page:May Support 00:26:53.092 NVMe-MI Commands & Effects Log Page: May Support 00:26:53.092 Data Area 4 for Telemetry Log: Not Supported 00:26:53.092 Error Log Page Entries Supported: 128 00:26:53.092 Keep Alive: Supported 00:26:53.092 Keep Alive Granularity: 10000 ms 00:26:53.092 00:26:53.092 NVM Command Set Attributes 00:26:53.092 ========================== 00:26:53.092 Submission Queue Entry Size 00:26:53.092 Max: 64 00:26:53.092 Min: 64 00:26:53.092 Completion Queue Entry Size 00:26:53.092 Max: 16 00:26:53.092 Min: 16 
00:26:53.092 Number of Namespaces: 32 00:26:53.092 Compare Command: Supported 00:26:53.092 Write Uncorrectable Command: Not Supported 00:26:53.092 Dataset Management Command: Supported 00:26:53.092 Write Zeroes Command: Supported 00:26:53.092 Set Features Save Field: Not Supported 00:26:53.092 Reservations: Not Supported 00:26:53.092 Timestamp: Not Supported 00:26:53.092 Copy: Supported 00:26:53.092 Volatile Write Cache: Present 00:26:53.092 Atomic Write Unit (Normal): 1 00:26:53.092 Atomic Write Unit (PFail): 1 00:26:53.092 Atomic Compare & Write Unit: 1 00:26:53.092 Fused Compare & Write: Supported 00:26:53.092 Scatter-Gather List 00:26:53.092 SGL Command Set: Supported (Dword aligned) 00:26:53.092 SGL Keyed: Not Supported 00:26:53.092 SGL Bit Bucket Descriptor: Not Supported 00:26:53.092 SGL Metadata Pointer: Not Supported 00:26:53.092 Oversized SGL: Not Supported 00:26:53.092 SGL Metadata Address: Not Supported 00:26:53.092 SGL Offset: Not Supported 00:26:53.092 Transport SGL Data Block: Not Supported 00:26:53.092 Replay Protected Memory Block: Not Supported 00:26:53.092 00:26:53.092 Firmware Slot Information 00:26:53.092 ========================= 00:26:53.092 Active slot: 1 00:26:53.092 Slot 1 Firmware Revision: 24.05.1 00:26:53.092 00:26:53.092 00:26:53.092 Commands Supported and Effects 00:26:53.092 ============================== 00:26:53.092 Admin Commands 00:26:53.092 -------------- 00:26:53.092 Get Log Page (02h): Supported 00:26:53.092 Identify (06h): Supported 00:26:53.092 Abort (08h): Supported 00:26:53.092 Set Features (09h): Supported 00:26:53.092 Get Features (0Ah): Supported 00:26:53.092 Asynchronous Event Request (0Ch): Supported 00:26:53.092 Keep Alive (18h): Supported 00:26:53.092 I/O Commands 00:26:53.092 ------------ 00:26:53.092 Flush (00h): Supported LBA-Change 00:26:53.092 Write (01h): Supported LBA-Change 00:26:53.092 Read (02h): Supported 00:26:53.092 Compare (05h): Supported 00:26:53.092 Write Zeroes (08h): Supported LBA-Change 00:26:53.092 Dataset Management (09h): Supported LBA-Change 00:26:53.092 Copy (19h): Supported LBA-Change 00:26:53.092 Unknown (79h): Supported LBA-Change 00:26:53.092 Unknown (7Ah): Supported 00:26:53.092 00:26:53.092 Error Log 00:26:53.092 ========= 00:26:53.092 00:26:53.092 Arbitration 00:26:53.092 =========== 00:26:53.092 Arbitration Burst: 1 00:26:53.092 00:26:53.092 Power Management 00:26:53.092 ================ 00:26:53.092 Number of Power States: 1 00:26:53.092 Current Power State: Power State #0 00:26:53.092 Power State #0: 00:26:53.092 Max Power: 0.00 W 00:26:53.092 Non-Operational State: Operational 00:26:53.092 Entry Latency: Not Reported 00:26:53.092 Exit Latency: Not Reported 00:26:53.092 Relative Read Throughput: 0 00:26:53.092 Relative Read Latency: 0 00:26:53.092 Relative Write Throughput: 0 00:26:53.092 Relative Write Latency: 0 00:26:53.092 Idle Power: Not Reported 00:26:53.092 Active Power: Not Reported 00:26:53.092 Non-Operational Permissive Mode: Not Supported 00:26:53.092 00:26:53.092 Health Information 00:26:53.092 ================== 00:26:53.092 Critical Warnings: 00:26:53.092 Available Spare Space: OK 00:26:53.092 Temperature: OK 00:26:53.092 Device Reliability: OK 00:26:53.092 Read Only: No 00:26:53.092 Volatile Memory Backup: OK 00:26:53.092 Current Temperature: 0 Kelvin (-273 Celsius) [2024-07-22 16:40:12.514186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:26:53.092 [2024-07-22 16:40:12.514203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:26:53.092 [2024-07-22 16:40:12.514256] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:26:53.092 [2024-07-22 16:40:12.514274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.092 [2024-07-22 16:40:12.514285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.092 [2024-07-22 16:40:12.514298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.092 [2024-07-22 16:40:12.514308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.092 [2024-07-22 16:40:12.514734] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:26:53.092 [2024-07-22 16:40:12.514756] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:26:53.092 [2024-07-22 16:40:12.515741] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:26:53.092 [2024-07-22 16:40:12.515829] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:26:53.092 [2024-07-22 16:40:12.515843] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:26:53.092 [2024-07-22 16:40:12.516741] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:26:53.092 [2024-07-22 16:40:12.516765] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:26:53.092 [2024-07-22 16:40:12.516819] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:26:53.092 [2024-07-22 16:40:12.518780] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:26:53.092 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:53.092 Available Spare: 0% 00:26:53.092 Available Spare Threshold: 0% 00:26:53.092 Life Percentage Used: 0% 00:26:53.092 Data Units Read: 0 00:26:53.092 Data Units Written: 0 00:26:53.093 Host Read Commands: 0 00:26:53.093 Host Write Commands: 0 00:26:53.093 Controller Busy Time: 0 minutes 00:26:53.093 Power Cycles: 0 00:26:53.093 Power On Hours: 0 hours 00:26:53.093 Unsafe Shutdowns: 0 00:26:53.093 Unrecoverable Media Errors: 0 00:26:53.093 Lifetime Error Log Entries: 0 00:26:53.093 Warning Temperature Time: 0 minutes 00:26:53.093 Critical Temperature Time: 0 minutes 00:26:53.093 00:26:53.093 Number of Queues 00:26:53.093 ================ 00:26:53.093 Number of I/O Submission Queues: 127 00:26:53.093 Number of I/O Completion Queues: 127 00:26:53.093 00:26:53.093 Active Namespaces 00:26:53.093 ================= 00:26:53.093 Namespace ID:1 00:26:53.093 Error Recovery Timeout: Unlimited 00:26:53.093 Command Set Identifier: NVM (00h) 00:26:53.093 Deallocate: Supported 00:26:53.093 Deallocated/Unwritten Error: Not Supported 
00:26:53.093 Deallocated Read Value: Unknown 00:26:53.093 Deallocate in Write Zeroes: Not Supported 00:26:53.093 Deallocated Guard Field: 0xFFFF 00:26:53.093 Flush: Supported 00:26:53.093 Reservation: Supported 00:26:53.093 Namespace Sharing Capabilities: Multiple Controllers 00:26:53.093 Size (in LBAs): 131072 (0GiB) 00:26:53.093 Capacity (in LBAs): 131072 (0GiB) 00:26:53.093 Utilization (in LBAs): 131072 (0GiB) 00:26:53.093 NGUID: 907079F8B6B44FE09758FC013C53ECD3 00:26:53.093 UUID: 907079f8-b6b4-4fe0-9758-fc013c53ecd3 00:26:53.093 Thin Provisioning: Not Supported 00:26:53.093 Per-NS Atomic Units: Yes 00:26:53.093 Atomic Boundary Size (Normal): 0 00:26:53.093 Atomic Boundary Size (PFail): 0 00:26:53.093 Atomic Boundary Offset: 0 00:26:53.093 Maximum Single Source Range Length: 65535 00:26:53.093 Maximum Copy Length: 65535 00:26:53.093 Maximum Source Range Count: 1 00:26:53.093 NGUID/EUI64 Never Reused: No 00:26:53.093 Namespace Write Protected: No 00:26:53.093 Number of LBA Formats: 1 00:26:53.093 Current LBA Format: LBA Format #00 00:26:53.093 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:53.093 00:26:53.093 16:40:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:26:53.093 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.351 [2024-07-22 16:40:12.750818] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:26:58.613 Initializing NVMe Controllers 00:26:58.613 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:26:58.613 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:26:58.613 Initialization complete. Launching workers. 00:26:58.613 ======================================================== 00:26:58.613 Latency(us) 00:26:58.613 Device Information : IOPS MiB/s Average min max 00:26:58.613 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35282.80 137.82 3627.31 1165.69 8291.94 00:26:58.613 ======================================================== 00:26:58.613 Total : 35282.80 137.82 3627.31 1165.69 8291.94 00:26:58.613 00:26:58.613 [2024-07-22 16:40:17.774071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:26:58.613 16:40:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:26:58.613 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.613 [2024-07-22 16:40:18.011244] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:03.879 Initializing NVMe Controllers 00:27:03.879 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:27:03.879 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:27:03.879 Initialization complete. Launching workers. 
00:27:03.879 ======================================================== 00:27:03.879 Latency(us) 00:27:03.879 Device Information : IOPS MiB/s Average min max 00:27:03.879 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.18 62.70 7984.46 6944.20 11968.89 00:27:03.879 ======================================================== 00:27:03.879 Total : 16051.18 62.70 7984.46 6944.20 11968.89 00:27:03.879 00:27:03.879 [2024-07-22 16:40:23.049779] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:03.879 16:40:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:27:03.879 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.879 [2024-07-22 16:40:23.270890] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:09.139 [2024-07-22 16:40:28.346292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:09.139 Initializing NVMe Controllers 00:27:09.139 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:27:09.139 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:27:09.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:27:09.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:27:09.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:27:09.139 Initialization complete. Launching workers. 00:27:09.139 Starting thread on core 2 00:27:09.139 Starting thread on core 3 00:27:09.139 Starting thread on core 1 00:27:09.139 16:40:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:27:09.139 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.139 [2024-07-22 16:40:28.650429] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:12.417 [2024-07-22 16:40:31.704601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:12.417 Initializing NVMe Controllers 00:27:12.417 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:12.417 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:12.417 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:27:12.417 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:27:12.417 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:27:12.417 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:27:12.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:27:12.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:27:12.417 Initialization complete. Launching workers. 
00:27:12.417 Starting thread on core 1 with urgent priority queue 00:27:12.417 Starting thread on core 2 with urgent priority queue 00:27:12.417 Starting thread on core 3 with urgent priority queue 00:27:12.417 Starting thread on core 0 with urgent priority queue 00:27:12.417 SPDK bdev Controller (SPDK1 ) core 0: 4988.67 IO/s 20.05 secs/100000 ios 00:27:12.417 SPDK bdev Controller (SPDK1 ) core 1: 5183.33 IO/s 19.29 secs/100000 ios 00:27:12.417 SPDK bdev Controller (SPDK1 ) core 2: 5753.00 IO/s 17.38 secs/100000 ios 00:27:12.417 SPDK bdev Controller (SPDK1 ) core 3: 5795.00 IO/s 17.26 secs/100000 ios 00:27:12.417 ======================================================== 00:27:12.417 00:27:12.417 16:40:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:27:12.417 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.417 [2024-07-22 16:40:32.003544] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:12.417 Initializing NVMe Controllers 00:27:12.417 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:12.417 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:12.417 Namespace ID: 1 size: 0GB 00:27:12.417 Initialization complete. 00:27:12.417 INFO: using host memory buffer for IO 00:27:12.417 Hello world! 00:27:12.417 [2024-07-22 16:40:32.038202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:12.674 16:40:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:27:12.674 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.931 [2024-07-22 16:40:32.344781] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:13.864 Initializing NVMe Controllers 00:27:13.864 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:13.864 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:13.864 Initialization complete. Launching workers. 
00:27:13.864 submit (in ns) avg, min, max = 8892.0, 3487.8, 5997557.8 00:27:13.864 complete (in ns) avg, min, max = 25128.8, 2065.6, 5994118.9 00:27:13.864 00:27:13.864 Submit histogram 00:27:13.864 ================ 00:27:13.864 Range in us Cumulative Count 00:27:13.864 3.484 - 3.508: 0.0075% ( 1) 00:27:13.864 3.508 - 3.532: 0.4172% ( 55) 00:27:13.864 3.532 - 3.556: 1.5124% ( 147) 00:27:13.864 3.556 - 3.579: 4.1499% ( 354) 00:27:13.864 3.579 - 3.603: 10.0432% ( 791) 00:27:13.864 3.603 - 3.627: 18.5889% ( 1147) 00:27:13.864 3.627 - 3.650: 27.8796% ( 1247) 00:27:13.864 3.650 - 3.674: 35.6430% ( 1042) 00:27:13.864 3.674 - 3.698: 43.0562% ( 995) 00:27:13.864 3.698 - 3.721: 49.9776% ( 929) 00:27:13.864 3.721 - 3.745: 55.8486% ( 788) 00:27:13.864 3.745 - 3.769: 59.9464% ( 550) 00:27:13.864 3.769 - 3.793: 63.4332% ( 468) 00:27:13.864 3.793 - 3.816: 66.5996% ( 425) 00:27:13.864 3.816 - 3.840: 70.0268% ( 460) 00:27:13.864 3.840 - 3.864: 74.0203% ( 536) 00:27:13.864 3.864 - 3.887: 77.8051% ( 508) 00:27:13.864 3.887 - 3.911: 81.1504% ( 449) 00:27:13.864 3.911 - 3.935: 84.2050% ( 410) 00:27:13.864 3.935 - 3.959: 86.4551% ( 302) 00:27:13.864 3.959 - 3.982: 88.1687% ( 230) 00:27:13.864 3.982 - 4.006: 89.7184% ( 208) 00:27:13.864 4.006 - 4.030: 90.7167% ( 134) 00:27:13.864 4.030 - 4.053: 91.6406% ( 124) 00:27:13.864 4.053 - 4.077: 92.3931% ( 101) 00:27:13.864 4.077 - 4.101: 93.1232% ( 98) 00:27:13.864 4.101 - 4.124: 93.9279% ( 108) 00:27:13.864 4.124 - 4.148: 94.6431% ( 96) 00:27:13.864 4.148 - 4.172: 95.1572% ( 69) 00:27:13.864 4.172 - 4.196: 95.5744% ( 56) 00:27:13.864 4.196 - 4.219: 95.8426% ( 36) 00:27:13.864 4.219 - 4.243: 96.0587% ( 29) 00:27:13.864 4.243 - 4.267: 96.2077% ( 20) 00:27:13.864 4.267 - 4.290: 96.3791% ( 23) 00:27:13.864 4.290 - 4.314: 96.4536% ( 10) 00:27:13.864 4.314 - 4.338: 96.5951% ( 19) 00:27:13.864 4.338 - 4.361: 96.7218% ( 17) 00:27:13.864 4.361 - 4.385: 96.7889% ( 9) 00:27:13.864 4.385 - 4.409: 96.8857% ( 13) 00:27:13.864 4.409 - 4.433: 96.9453% ( 8) 00:27:13.864 4.433 - 4.456: 96.9826% ( 5) 00:27:13.864 4.456 - 4.480: 97.0049% ( 3) 00:27:13.864 4.480 - 4.504: 97.0869% ( 11) 00:27:13.864 4.527 - 4.551: 97.1018% ( 2) 00:27:13.864 4.551 - 4.575: 97.1241% ( 3) 00:27:13.864 4.575 - 4.599: 97.1390% ( 2) 00:27:13.864 4.599 - 4.622: 97.1465% ( 1) 00:27:13.864 4.646 - 4.670: 97.1539% ( 1) 00:27:13.864 4.670 - 4.693: 97.1763% ( 3) 00:27:13.864 4.693 - 4.717: 97.2061% ( 4) 00:27:13.864 4.717 - 4.741: 97.2135% ( 1) 00:27:13.864 4.741 - 4.764: 97.2582% ( 6) 00:27:13.864 4.764 - 4.788: 97.3029% ( 6) 00:27:13.864 4.788 - 4.812: 97.3327% ( 4) 00:27:13.864 4.812 - 4.836: 97.3700% ( 5) 00:27:13.864 4.836 - 4.859: 97.4370% ( 9) 00:27:13.864 4.859 - 4.883: 97.4966% ( 8) 00:27:13.864 4.883 - 4.907: 97.5414% ( 6) 00:27:13.864 4.907 - 4.930: 97.6308% ( 12) 00:27:13.864 4.930 - 4.954: 97.6755% ( 6) 00:27:13.864 4.954 - 4.978: 97.7276% ( 7) 00:27:13.864 4.978 - 5.001: 97.7500% ( 3) 00:27:13.864 5.001 - 5.025: 97.7872% ( 5) 00:27:13.864 5.025 - 5.049: 97.8021% ( 2) 00:27:13.864 5.049 - 5.073: 97.8394% ( 5) 00:27:13.864 5.073 - 5.096: 97.8543% ( 2) 00:27:13.864 5.096 - 5.120: 97.8617% ( 1) 00:27:13.864 5.120 - 5.144: 97.8692% ( 1) 00:27:13.864 5.144 - 5.167: 97.8766% ( 1) 00:27:13.864 5.167 - 5.191: 97.8990% ( 3) 00:27:13.864 5.191 - 5.215: 97.9139% ( 2) 00:27:13.864 5.239 - 5.262: 97.9288% ( 2) 00:27:13.864 5.262 - 5.286: 97.9362% ( 1) 00:27:13.864 5.286 - 5.310: 97.9511% ( 2) 00:27:13.864 5.310 - 5.333: 97.9660% ( 2) 00:27:13.864 5.357 - 5.381: 97.9735% ( 1) 00:27:13.864 5.381 - 5.404: 97.9809% ( 1) 
00:27:13.864 5.499 - 5.523: 97.9958% ( 2) 00:27:13.864 5.523 - 5.547: 98.0107% ( 2) 00:27:13.864 5.570 - 5.594: 98.0182% ( 1) 00:27:13.864 5.641 - 5.665: 98.0256% ( 1) 00:27:13.864 5.665 - 5.689: 98.0331% ( 1) 00:27:13.864 5.713 - 5.736: 98.0405% ( 1) 00:27:13.864 5.926 - 5.950: 98.0480% ( 1) 00:27:13.864 5.973 - 5.997: 98.0554% ( 1) 00:27:13.864 5.997 - 6.021: 98.0629% ( 1) 00:27:13.864 6.068 - 6.116: 98.0703% ( 1) 00:27:13.864 6.163 - 6.210: 98.0778% ( 1) 00:27:13.864 6.210 - 6.258: 98.0927% ( 2) 00:27:13.864 6.353 - 6.400: 98.1001% ( 1) 00:27:13.864 6.447 - 6.495: 98.1076% ( 1) 00:27:13.864 6.542 - 6.590: 98.1150% ( 1) 00:27:13.864 6.732 - 6.779: 98.1225% ( 1) 00:27:13.864 6.779 - 6.827: 98.1299% ( 1) 00:27:13.864 6.921 - 6.969: 98.1374% ( 1) 00:27:13.864 6.969 - 7.016: 98.1448% ( 1) 00:27:13.864 7.064 - 7.111: 98.1523% ( 1) 00:27:13.864 7.111 - 7.159: 98.1597% ( 1) 00:27:13.864 7.159 - 7.206: 98.1672% ( 1) 00:27:13.864 7.206 - 7.253: 98.1746% ( 1) 00:27:13.864 7.396 - 7.443: 98.1821% ( 1) 00:27:13.864 7.443 - 7.490: 98.1895% ( 1) 00:27:13.864 7.490 - 7.538: 98.1970% ( 1) 00:27:13.864 7.680 - 7.727: 98.2044% ( 1) 00:27:13.864 7.727 - 7.775: 98.2119% ( 1) 00:27:13.864 7.822 - 7.870: 98.2193% ( 1) 00:27:13.864 7.964 - 8.012: 98.2268% ( 1) 00:27:13.864 8.012 - 8.059: 98.2491% ( 3) 00:27:13.864 8.154 - 8.201: 98.2715% ( 3) 00:27:13.864 8.201 - 8.249: 98.2789% ( 1) 00:27:13.864 8.249 - 8.296: 98.2864% ( 1) 00:27:13.864 8.296 - 8.344: 98.2938% ( 1) 00:27:13.864 8.391 - 8.439: 98.3013% ( 1) 00:27:13.864 8.439 - 8.486: 98.3087% ( 1) 00:27:13.864 8.628 - 8.676: 98.3162% ( 1) 00:27:13.864 8.676 - 8.723: 98.3385% ( 3) 00:27:13.864 8.723 - 8.770: 98.3460% ( 1) 00:27:13.864 8.818 - 8.865: 98.3534% ( 1) 00:27:13.864 8.913 - 8.960: 98.3609% ( 1) 00:27:13.864 9.007 - 9.055: 98.3684% ( 1) 00:27:13.864 9.055 - 9.102: 98.3758% ( 1) 00:27:13.864 9.102 - 9.150: 98.3982% ( 3) 00:27:13.864 9.150 - 9.197: 98.4056% ( 1) 00:27:13.864 9.197 - 9.244: 98.4205% ( 2) 00:27:13.864 9.244 - 9.292: 98.4280% ( 1) 00:27:13.864 9.292 - 9.339: 98.4429% ( 2) 00:27:13.864 9.339 - 9.387: 98.4503% ( 1) 00:27:13.864 9.434 - 9.481: 98.4578% ( 1) 00:27:13.864 9.481 - 9.529: 98.4652% ( 1) 00:27:13.864 9.529 - 9.576: 98.4727% ( 1) 00:27:13.864 9.576 - 9.624: 98.4801% ( 1) 00:27:13.864 9.671 - 9.719: 98.4876% ( 1) 00:27:13.864 9.766 - 9.813: 98.5025% ( 2) 00:27:13.864 9.813 - 9.861: 98.5099% ( 1) 00:27:13.864 9.908 - 9.956: 98.5174% ( 1) 00:27:13.864 10.003 - 10.050: 98.5323% ( 2) 00:27:13.864 10.145 - 10.193: 98.5472% ( 2) 00:27:13.864 10.193 - 10.240: 98.5546% ( 1) 00:27:13.864 10.240 - 10.287: 98.5621% ( 1) 00:27:13.864 10.382 - 10.430: 98.5695% ( 1) 00:27:13.864 10.430 - 10.477: 98.5770% ( 1) 00:27:13.864 10.524 - 10.572: 98.5844% ( 1) 00:27:13.864 10.572 - 10.619: 98.5993% ( 2) 00:27:13.864 10.619 - 10.667: 98.6068% ( 1) 00:27:13.864 10.667 - 10.714: 98.6217% ( 2) 00:27:13.864 10.714 - 10.761: 98.6291% ( 1) 00:27:13.864 10.809 - 10.856: 98.6366% ( 1) 00:27:13.864 11.046 - 11.093: 98.6440% ( 1) 00:27:13.864 11.425 - 11.473: 98.6515% ( 1) 00:27:13.864 11.615 - 11.662: 98.6589% ( 1) 00:27:13.864 11.852 - 11.899: 98.6664% ( 1) 00:27:13.864 12.136 - 12.231: 98.6813% ( 2) 00:27:13.864 12.231 - 12.326: 98.6887% ( 1) 00:27:13.864 12.326 - 12.421: 98.6962% ( 1) 00:27:13.864 12.516 - 12.610: 98.7111% ( 2) 00:27:13.864 12.990 - 13.084: 98.7185% ( 1) 00:27:13.864 13.084 - 13.179: 98.7260% ( 1) 00:27:13.864 13.274 - 13.369: 98.7409% ( 2) 00:27:13.864 13.369 - 13.464: 98.7558% ( 2) 00:27:13.864 13.559 - 13.653: 98.7632% ( 1) 00:27:13.864 
13.653 - 13.748: 98.7707% ( 1) 00:27:13.864 13.748 - 13.843: 98.7856% ( 2) 00:27:13.864 13.938 - 14.033: 98.7930% ( 1) 00:27:13.864 14.033 - 14.127: 98.8005% ( 1) 00:27:13.864 14.317 - 14.412: 98.8079% ( 1) 00:27:13.864 14.601 - 14.696: 98.8154% ( 1) 00:27:13.864 14.696 - 14.791: 98.8303% ( 2) 00:27:13.864 14.791 - 14.886: 98.8526% ( 3) 00:27:13.864 15.550 - 15.644: 98.8601% ( 1) 00:27:13.864 17.067 - 17.161: 98.8675% ( 1) 00:27:13.864 17.161 - 17.256: 98.8899% ( 3) 00:27:13.864 17.256 - 17.351: 98.9048% ( 2) 00:27:13.864 17.351 - 17.446: 98.9197% ( 2) 00:27:13.865 17.446 - 17.541: 98.9569% ( 5) 00:27:13.865 17.541 - 17.636: 98.9942% ( 5) 00:27:13.865 17.636 - 17.730: 99.0463% ( 7) 00:27:13.865 17.730 - 17.825: 99.0985% ( 7) 00:27:13.865 17.825 - 17.920: 99.1432% ( 6) 00:27:13.865 17.920 - 18.015: 99.2550% ( 15) 00:27:13.865 18.015 - 18.110: 99.2997% ( 6) 00:27:13.865 18.110 - 18.204: 99.3965% ( 13) 00:27:13.865 18.204 - 18.299: 99.4785% ( 11) 00:27:13.865 18.299 - 18.394: 99.5455% ( 9) 00:27:13.865 18.394 - 18.489: 99.6126% ( 9) 00:27:13.865 18.489 - 18.584: 99.6647% ( 7) 00:27:13.865 18.584 - 18.679: 99.7169% ( 7) 00:27:13.865 18.679 - 18.773: 99.7467% ( 4) 00:27:13.865 18.868 - 18.963: 99.7616% ( 2) 00:27:13.865 18.963 - 19.058: 99.7690% ( 1) 00:27:13.865 19.153 - 19.247: 99.7914% ( 3) 00:27:13.865 19.342 - 19.437: 99.8063% ( 2) 00:27:13.865 19.532 - 19.627: 99.8137% ( 1) 00:27:13.865 19.627 - 19.721: 99.8212% ( 1) 00:27:13.865 22.661 - 22.756: 99.8286% ( 1) 00:27:13.865 23.324 - 23.419: 99.8361% ( 1) 00:27:13.865 24.273 - 24.462: 99.8435% ( 1) 00:27:13.865 24.462 - 24.652: 99.8510% ( 1) 00:27:13.865 25.031 - 25.221: 99.8584% ( 1) 00:27:13.865 25.410 - 25.600: 99.8659% ( 1) 00:27:13.865 28.824 - 29.013: 99.8733% ( 1) 00:27:13.865 30.151 - 30.341: 99.8808% ( 1) 00:27:13.865 3980.705 - 4004.978: 99.9627% ( 11) 00:27:13.865 4004.978 - 4029.250: 99.9925% ( 4) 00:27:13.865 5995.330 - 6019.603: 100.0000% ( 1) 00:27:13.865 00:27:13.865 Complete histogram 00:27:13.865 ================== 00:27:13.865 Range in us Cumulative Count 00:27:13.865 2.062 - 2.074: 4.2542% ( 571) 00:27:13.865 2.074 - 2.086: 33.0651% ( 3867) 00:27:13.865 2.086 - 2.098: 37.8558% ( 643) 00:27:13.865 2.098 - 2.110: 45.4850% ( 1024) 00:27:13.865 2.110 - 2.121: 58.4414% ( 1739) 00:27:13.865 2.121 - 2.133: 60.5349% ( 281) 00:27:13.865 2.133 - 2.145: 67.3670% ( 917) 00:27:13.865 2.145 - 2.157: 74.7281% ( 988) 00:27:13.865 2.157 - 2.169: 75.7860% ( 142) 00:27:13.865 2.169 - 2.181: 79.1015% ( 445) 00:27:13.865 2.181 - 2.193: 81.9326% ( 380) 00:27:13.865 2.193 - 2.204: 82.7075% ( 104) 00:27:13.865 2.204 - 2.216: 84.7787% ( 278) 00:27:13.865 2.216 - 2.228: 87.9973% ( 432) 00:27:13.865 2.228 - 2.240: 89.9866% ( 267) 00:27:13.865 2.240 - 2.252: 91.7970% ( 243) 00:27:13.865 2.252 - 2.264: 93.3616% ( 210) 00:27:13.865 2.264 - 2.276: 93.6746% ( 42) 00:27:13.865 2.276 - 2.287: 94.0545% ( 51) 00:27:13.865 2.287 - 2.299: 94.5090% ( 61) 00:27:13.865 2.299 - 2.311: 95.1125% ( 81) 00:27:13.865 2.311 - 2.323: 95.4478% ( 45) 00:27:13.865 2.323 - 2.335: 95.5744% ( 17) 00:27:13.865 2.335 - 2.347: 95.6117% ( 5) 00:27:13.865 2.347 - 2.359: 95.6862% ( 10) 00:27:13.865 2.359 - 2.370: 95.7458% ( 8) 00:27:13.865 2.370 - 2.382: 95.9321% ( 25) 00:27:13.865 2.382 - 2.394: 96.1705% ( 32) 00:27:13.865 2.394 - 2.406: 96.3493% ( 24) 00:27:13.865 2.406 - 2.418: 96.5206% ( 23) 00:27:13.865 2.418 - 2.430: 96.7069% ( 25) 00:27:13.865 2.430 - 2.441: 96.9677% ( 35) 00:27:13.865 2.441 - 2.453: 97.2061% ( 32) 00:27:13.865 2.453 - 2.465: 97.4370% ( 31) 00:27:13.865 
2.465 - 2.477: 97.6233% ( 25) 00:27:13.865 2.477 - 2.489: 97.7947% ( 23) 00:27:13.865 2.489 - 2.501: 97.9064% ( 15) 00:27:13.865 2.501 - 2.513: 98.0405% ( 18) 00:27:13.865 2.513 - 2.524: 98.1001% ( 8) 00:27:13.865 2.524 - 2.536: 98.1895% ( 12) 00:27:13.865 2.536 - 2.548: 98.2715% ( 11) 00:27:13.865 2.548 - 2.560: 98.3460% ( 10) 00:27:13.865 2.560 - 2.572: 98.3982% ( 7) 00:27:13.865 2.572 - 2.584: 98.4056% ( 1) 00:27:13.865 2.584 - 2.596: 98.4131% ( 1) 00:27:13.865 2.596 - 2.607: 98.4280% ( 2) 00:27:13.865 2.619 - 2.631: 98.4503% ( 3) 00:27:13.865 [2024-07-22 16:40:33.364144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:13.865 2.643 - 2.655: 98.4578% ( 1) 00:27:13.865 2.655 - 2.667: 98.4652% ( 1) 00:27:13.865 2.679 - 2.690: 98.4727% ( 1) 00:27:13.865 2.702 - 2.714: 98.4801% ( 1) 00:27:13.865 2.714 - 2.726: 98.4950% ( 2) 00:27:13.865 2.726 - 2.738: 98.5025% ( 1) 00:27:13.865 2.738 - 2.750: 98.5099% ( 1) 00:27:13.865 2.761 - 2.773: 98.5174% ( 1) 00:27:13.865 2.797 - 2.809: 98.5323% ( 2) 00:27:13.865 2.821 - 2.833: 98.5397% ( 1) 00:27:13.865 2.844 - 2.856: 98.5546% ( 2) 00:27:13.865 2.892 - 2.904: 98.5621% ( 1) 00:27:13.865 3.271 - 3.295: 98.5695% ( 1) 00:27:13.865 3.295 - 3.319: 98.5844% ( 2) 00:27:13.865 3.319 - 3.342: 98.5919% ( 1) 00:27:13.865 3.366 - 3.390: 98.5993% ( 1) 00:27:13.865 3.390 - 3.413: 98.6142% ( 2) 00:27:13.865 3.413 - 3.437: 98.6366% ( 3) 00:27:13.865 3.461 - 3.484: 98.6440% ( 1) 00:27:13.865 3.484 - 3.508: 98.6589% ( 2) 00:27:13.865 3.532 - 3.556: 98.6664% ( 1) 00:27:13.865 3.556 - 3.579: 98.6813% ( 2) 00:27:13.865 3.579 - 3.603: 98.6962% ( 2) 00:27:13.865 3.650 - 3.674: 98.7111% ( 2) 00:27:13.865 3.674 - 3.698: 98.7334% ( 3) 00:27:13.865 3.721 - 3.745: 98.7409% ( 1) 00:27:13.865 3.745 - 3.769: 98.7483% ( 1) 00:27:13.865 3.840 - 3.864: 98.7558% ( 1) 00:27:13.865 4.124 - 4.148: 98.7632% ( 1) 00:27:13.865 4.290 - 4.314: 98.7707% ( 1) 00:27:13.865 4.764 - 4.788: 98.7781% ( 1) 00:27:13.865 5.902 - 5.926: 98.7856% ( 1) 00:27:13.865 5.997 - 6.021: 98.7930% ( 1) 00:27:13.865 6.163 - 6.210: 98.8005% ( 1) 00:27:13.865 6.353 - 6.400: 98.8079% ( 1) 00:27:13.865 6.400 - 6.447: 98.8154% ( 1) 00:27:13.865 6.637 - 6.684: 98.8228% ( 1) 00:27:13.865 6.827 - 6.874: 98.8303% ( 1) 00:27:13.865 6.921 - 6.969: 98.8526% ( 3) 00:27:13.865 7.396 - 7.443: 98.8601% ( 1) 00:27:13.865 7.633 - 7.680: 98.8675% ( 1) 00:27:13.865 7.964 - 8.012: 98.8750% ( 1) 00:27:13.865 9.197 - 9.244: 98.8824% ( 1) 00:27:13.865 10.050 - 10.098: 98.8899% ( 1) 00:27:13.865 11.804 - 11.852: 98.8973% ( 1) 00:27:13.865 15.644 - 15.739: 98.9048% ( 1) 00:27:13.865 15.739 - 15.834: 98.9271% ( 3) 00:27:13.865 15.834 - 15.929: 98.9420% ( 2) 00:27:13.865 15.929 - 16.024: 98.9644% ( 3) 00:27:13.865 16.024 - 16.119: 99.0165% ( 7) 00:27:13.865 16.119 - 16.213: 99.0463% ( 4) 00:27:13.866 16.213 - 16.308: 99.0985% ( 7) 00:27:13.866 16.308 - 16.403: 99.1357% ( 5) 00:27:13.866 16.403 - 16.498: 99.1581% ( 3) 00:27:13.866 16.498 - 16.593: 99.2028% ( 6) 00:27:13.866 16.593 - 16.687: 99.2326% ( 4) 00:27:13.866 16.687 - 16.782: 99.2550% ( 3) 00:27:13.866 16.782 - 16.877: 99.2922% ( 5) 00:27:13.866 16.877 - 16.972: 99.3220% ( 4) 00:27:13.866 16.972 - 17.067: 99.3295% ( 1) 00:27:13.866 17.161 - 17.256: 99.3518% ( 3) 00:27:13.866 17.351 - 17.446: 99.3593% ( 1) 00:27:13.866 17.541 - 17.636: 99.3816% ( 3) 00:27:13.866 17.920 - 18.015: 99.3891% ( 1) 00:27:13.866 18.015 - 18.110: 99.3965% ( 1) 00:27:13.866 18.110 - 18.204: 99.4040% ( 1) 00:27:13.866 18.299 - 18.394: 99.4114% ( 1)
00:27:13.866 23.135 - 23.230: 99.4189% ( 1) 00:27:13.866 26.548 - 26.738: 99.4263% ( 1) 00:27:13.866 2014.625 - 2026.761: 99.4338% ( 1) 00:27:13.866 2536.486 - 2548.622: 99.4412% ( 1) 00:27:13.866 3932.160 - 3956.433: 99.4487% ( 1) 00:27:13.866 3980.705 - 4004.978: 99.9329% ( 65) 00:27:13.866 4004.978 - 4029.250: 99.9851% ( 7) 00:27:13.866 4975.881 - 5000.154: 99.9925% ( 1) 00:27:13.866 5971.058 - 5995.330: 100.0000% ( 1) 00:27:13.866 00:27:13.866 16:40:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:27:13.866 16:40:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:27:13.866 16:40:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:27:13.866 16:40:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:27:13.866 16:40:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:14.123 [ 00:27:14.123 { 00:27:14.123 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:14.123 "subtype": "Discovery", 00:27:14.123 "listen_addresses": [], 00:27:14.123 "allow_any_host": true, 00:27:14.123 "hosts": [] 00:27:14.123 }, 00:27:14.123 { 00:27:14.123 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:27:14.123 "subtype": "NVMe", 00:27:14.123 "listen_addresses": [ 00:27:14.123 { 00:27:14.123 "trtype": "VFIOUSER", 00:27:14.123 "adrfam": "IPv4", 00:27:14.123 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:14.123 "trsvcid": "0" 00:27:14.123 } 00:27:14.123 ], 00:27:14.123 "allow_any_host": true, 00:27:14.123 "hosts": [], 00:27:14.123 "serial_number": "SPDK1", 00:27:14.123 "model_number": "SPDK bdev Controller", 00:27:14.123 "max_namespaces": 32, 00:27:14.123 "min_cntlid": 1, 00:27:14.123 "max_cntlid": 65519, 00:27:14.123 "namespaces": [ 00:27:14.123 { 00:27:14.123 "nsid": 1, 00:27:14.123 "bdev_name": "Malloc1", 00:27:14.123 "name": "Malloc1", 00:27:14.123 "nguid": "907079F8B6B44FE09758FC013C53ECD3", 00:27:14.123 "uuid": "907079f8-b6b4-4fe0-9758-fc013c53ecd3" 00:27:14.123 } 00:27:14.123 ] 00:27:14.123 }, 00:27:14.123 { 00:27:14.123 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:14.123 "subtype": "NVMe", 00:27:14.123 "listen_addresses": [ 00:27:14.123 { 00:27:14.123 "trtype": "VFIOUSER", 00:27:14.123 "adrfam": "IPv4", 00:27:14.123 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:14.123 "trsvcid": "0" 00:27:14.123 } 00:27:14.123 ], 00:27:14.123 "allow_any_host": true, 00:27:14.123 "hosts": [], 00:27:14.123 "serial_number": "SPDK2", 00:27:14.123 "model_number": "SPDK bdev Controller", 00:27:14.123 "max_namespaces": 32, 00:27:14.123 "min_cntlid": 1, 00:27:14.123 "max_cntlid": 65519, 00:27:14.123 "namespaces": [ 00:27:14.123 { 00:27:14.123 "nsid": 1, 00:27:14.123 "bdev_name": "Malloc2", 00:27:14.123 "name": "Malloc2", 00:27:14.123 "nguid": "D2C0223569BE47C7AE2DB28A9A2D527A", 00:27:14.123 "uuid": "d2c02235-69be-47c7-ae2d-b28a9a2d527a" 00:27:14.123 } 00:27:14.123 ] 00:27:14.123 } 00:27:14.123 ] 00:27:14.123 16:40:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:14.123 16:40:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2757989 00:27:14.123 16:40:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' 
trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:27:14.123 16:40:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:27:14.123 16:40:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:27:14.123 16:40:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:14.123 16:40:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:14.123 16:40:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:27:14.123 16:40:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:27:14.123 16:40:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:27:14.123 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.381 [2024-07-22 16:40:33.882536] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:14.381 Malloc3 00:27:14.382 16:40:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:27:14.639 [2024-07-22 16:40:34.252263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:14.639 16:40:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:14.896 Asynchronous Event Request test 00:27:14.896 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:14.896 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:14.896 Registering asynchronous event callbacks... 00:27:14.896 Starting namespace attribute notice tests for all controllers... 00:27:14.896 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:14.896 aer_cb - Changed Namespace 00:27:14.896 Cleaning up... 
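The run above is the namespace-attach AER exercise: the aer test binary attaches over vfio-user, registers asynchronous event callbacks, and signals readiness through a touch file; the harness then hot-adds a second namespace so the target raises a Namespace Attribute Changed notice (the "aer_cb - Changed Namespace" line). A minimal sketch of that same flow, assuming an SPDK checkout at $SPDK and a target already serving the vfio-user1/1 socket (the long Jenkins workspace paths above are abbreviated; the RPC invocations mirror the ones logged):

  # Start the AER listener against the vfio-user controller; it touches
  # /tmp/aer_touch_file once its event callbacks are armed.
  $SPDK/test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
  rm -f /tmp/aer_touch_file
  # Create a backing bdev and hot-add it as namespace 2; the attach is what
  # fires the namespace-attribute AEN observed by the listener.
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  $SPDK/scripts/rpc.py nvmf_get_subsystems   # cnode1 now lists nsid 2 (Malloc3)
  wait $aerpid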
00:27:14.896 [ 00:27:14.896 { 00:27:14.896 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:14.896 "subtype": "Discovery", 00:27:14.896 "listen_addresses": [], 00:27:14.897 "allow_any_host": true, 00:27:14.897 "hosts": [] 00:27:14.897 }, 00:27:14.897 { 00:27:14.897 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:27:14.897 "subtype": "NVMe", 00:27:14.897 "listen_addresses": [ 00:27:14.897 { 00:27:14.897 "trtype": "VFIOUSER", 00:27:14.897 "adrfam": "IPv4", 00:27:14.897 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:14.897 "trsvcid": "0" 00:27:14.897 } 00:27:14.897 ], 00:27:14.897 "allow_any_host": true, 00:27:14.897 "hosts": [], 00:27:14.897 "serial_number": "SPDK1", 00:27:14.897 "model_number": "SPDK bdev Controller", 00:27:14.897 "max_namespaces": 32, 00:27:14.897 "min_cntlid": 1, 00:27:14.897 "max_cntlid": 65519, 00:27:14.897 "namespaces": [ 00:27:14.897 { 00:27:14.897 "nsid": 1, 00:27:14.897 "bdev_name": "Malloc1", 00:27:14.897 "name": "Malloc1", 00:27:14.897 "nguid": "907079F8B6B44FE09758FC013C53ECD3", 00:27:14.897 "uuid": "907079f8-b6b4-4fe0-9758-fc013c53ecd3" 00:27:14.897 }, 00:27:14.897 { 00:27:14.897 "nsid": 2, 00:27:14.897 "bdev_name": "Malloc3", 00:27:14.897 "name": "Malloc3", 00:27:14.897 "nguid": "29CA82E8399645349E807949CC109DCD", 00:27:14.897 "uuid": "29ca82e8-3996-4534-9e80-7949cc109dcd" 00:27:14.897 } 00:27:14.897 ] 00:27:14.897 }, 00:27:14.897 { 00:27:14.897 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:14.897 "subtype": "NVMe", 00:27:14.897 "listen_addresses": [ 00:27:14.897 { 00:27:14.897 "trtype": "VFIOUSER", 00:27:14.897 "adrfam": "IPv4", 00:27:14.897 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:14.897 "trsvcid": "0" 00:27:14.897 } 00:27:14.897 ], 00:27:14.897 "allow_any_host": true, 00:27:14.897 "hosts": [], 00:27:14.897 "serial_number": "SPDK2", 00:27:14.897 "model_number": "SPDK bdev Controller", 00:27:14.897 "max_namespaces": 32, 00:27:14.897 "min_cntlid": 1, 00:27:14.897 "max_cntlid": 65519, 00:27:14.897 "namespaces": [ 00:27:14.897 { 00:27:14.897 "nsid": 1, 00:27:14.897 "bdev_name": "Malloc2", 00:27:14.897 "name": "Malloc2", 00:27:14.897 "nguid": "D2C0223569BE47C7AE2DB28A9A2D527A", 00:27:14.897 "uuid": "d2c02235-69be-47c7-ae2d-b28a9a2d527a" 00:27:14.897 } 00:27:14.897 ] 00:27:14.897 } 00:27:14.897 ] 00:27:14.897 16:40:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2757989 00:27:14.897 16:40:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:27:14.897 16:40:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:27:14.897 16:40:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:27:14.897 16:40:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:27:14.897 [2024-07-22 16:40:34.518268] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:27:14.897 [2024-07-22 16:40:34.518305] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2758010 ] 00:27:14.897 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.156 [2024-07-22 16:40:34.550145] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:27:15.156 [2024-07-22 16:40:34.559277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:27:15.156 [2024-07-22 16:40:34.559307] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe4a4ba6000 00:27:15.156 [2024-07-22 16:40:34.560261] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:15.156 [2024-07-22 16:40:34.561269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:15.156 [2024-07-22 16:40:34.562294] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:15.156 [2024-07-22 16:40:34.563301] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:15.156 [2024-07-22 16:40:34.564312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:15.156 [2024-07-22 16:40:34.565312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:15.156 [2024-07-22 16:40:34.566326] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:15.156 [2024-07-22 16:40:34.567337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:15.156 [2024-07-22 16:40:34.568346] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:27:15.156 [2024-07-22 16:40:34.568368] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe4a395c000 00:27:15.156 [2024-07-22 16:40:34.569500] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:27:15.156 [2024-07-22 16:40:34.584651] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:27:15.156 [2024-07-22 16:40:34.584701] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:27:15.156 [2024-07-22 16:40:34.589817] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:27:15.156 [2024-07-22 16:40:34.589871] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:27:15.156 [2024-07-22 16:40:34.589978] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:27:15.156 [2024-07-22 16:40:34.590021] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:27:15.156 [2024-07-22 16:40:34.590033] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:27:15.156 [2024-07-22 16:40:34.590826] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:27:15.156 [2024-07-22 16:40:34.590850] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:27:15.156 [2024-07-22 16:40:34.590864] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:27:15.156 [2024-07-22 16:40:34.591827] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:27:15.156 [2024-07-22 16:40:34.591848] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:27:15.156 [2024-07-22 16:40:34.591861] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:27:15.156 [2024-07-22 16:40:34.592833] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:27:15.156 [2024-07-22 16:40:34.592853] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:15.156 [2024-07-22 16:40:34.593846] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:27:15.156 [2024-07-22 16:40:34.593865] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:27:15.156 [2024-07-22 16:40:34.593875] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:27:15.156 [2024-07-22 16:40:34.593886] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:15.156 [2024-07-22 16:40:34.593997] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:27:15.156 [2024-07-22 16:40:34.594008] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:15.156 [2024-07-22 16:40:34.594017] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:27:15.156 [2024-07-22 16:40:34.594853] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:27:15.156 [2024-07-22 16:40:34.595859] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:27:15.156 [2024-07-22 16:40:34.596870] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:27:15.156 [2024-07-22 16:40:34.597860] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:15.156 [2024-07-22 16:40:34.597947] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:15.156 [2024-07-22 16:40:34.598879] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:27:15.156 [2024-07-22 16:40:34.598899] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:15.156 [2024-07-22 16:40:34.598908] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:27:15.156 [2024-07-22 16:40:34.598931] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:27:15.157 [2024-07-22 16:40:34.598969] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.598999] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:15.157 [2024-07-22 16:40:34.599020] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:15.157 [2024-07-22 16:40:34.599039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.606981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.607008] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:27:15.157 [2024-07-22 16:40:34.607019] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:27:15.157 [2024-07-22 16:40:34.607027] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:27:15.157 [2024-07-22 16:40:34.607035] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:27:15.157 [2024-07-22 16:40:34.607043] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:27:15.157 [2024-07-22 16:40:34.607051] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:27:15.157 [2024-07-22 16:40:34.607060] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.607072] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.607089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.614975] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.615000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.157 [2024-07-22 16:40:34.615018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.157 [2024-07-22 16:40:34.615031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.157 [2024-07-22 16:40:34.615043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.157 [2024-07-22 16:40:34.615052] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.615072] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.615089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.622990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.623010] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:27:15.157 [2024-07-22 16:40:34.623019] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.623031] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.623046] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.623061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.630975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.631049] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.631066] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.631080] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:27:15.157 [2024-07-22 16:40:34.631088] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:27:15.157 [2024-07-22 16:40:34.631098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:27:15.157 
[2024-07-22 16:40:34.638977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.639001] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:27:15.157 [2024-07-22 16:40:34.639024] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.639037] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.639050] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:15.157 [2024-07-22 16:40:34.639058] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:15.157 [2024-07-22 16:40:34.639068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.646989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.647043] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.647061] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.647075] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:15.157 [2024-07-22 16:40:34.647084] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:15.157 [2024-07-22 16:40:34.647098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.654976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.655000] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.655023] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.655040] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.655051] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.655060] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.655069] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:27:15.157 [2024-07-22 16:40:34.655077] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:27:15.157 [2024-07-22 16:40:34.655086] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:27:15.157 [2024-07-22 16:40:34.655115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.662975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.663004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.670978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.671005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.678976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.679002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.686979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.687032] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:27:15.157 [2024-07-22 16:40:34.687053] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:27:15.157 [2024-07-22 16:40:34.687060] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:27:15.157 [2024-07-22 16:40:34.687066] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:27:15.157 [2024-07-22 16:40:34.687077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:27:15.157 [2024-07-22 16:40:34.687090] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:27:15.157 [2024-07-22 16:40:34.687099] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:27:15.157 [2024-07-22 16:40:34.687108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.687124] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:27:15.157 [2024-07-22 16:40:34.687134] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:15.157 [2024-07-22 16:40:34.687143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.687156] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:27:15.157 [2024-07-22 16:40:34.687164] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:27:15.157 [2024-07-22 16:40:34.687173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:27:15.157 [2024-07-22 16:40:34.694990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:27:15.157 [2024-07-22 16:40:34.695017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:27:15.158 [2024-07-22 16:40:34.695033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:27:15.158 [2024-07-22 16:40:34.695048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:27:15.158 ===================================================== 00:27:15.158 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:15.158 ===================================================== 00:27:15.158 Controller Capabilities/Features 00:27:15.158 ================================ 00:27:15.158 Vendor ID: 4e58 00:27:15.158 Subsystem Vendor ID: 4e58 00:27:15.158 Serial Number: SPDK2 00:27:15.158 Model Number: SPDK bdev Controller 00:27:15.158 Firmware Version: 24.05.1 00:27:15.158 Recommended Arb Burst: 6 00:27:15.158 IEEE OUI Identifier: 8d 6b 50 00:27:15.158 Multi-path I/O 00:27:15.158 May have multiple subsystem ports: Yes 00:27:15.158 May have multiple controllers: Yes 00:27:15.158 Associated with SR-IOV VF: No 00:27:15.158 Max Data Transfer Size: 131072 00:27:15.158 Max Number of Namespaces: 32 00:27:15.158 Max Number of I/O Queues: 127 00:27:15.158 NVMe Specification Version (VS): 1.3 00:27:15.158 NVMe Specification Version (Identify): 1.3 00:27:15.158 Maximum Queue Entries: 256 00:27:15.158 Contiguous Queues Required: Yes 00:27:15.158 Arbitration Mechanisms Supported 00:27:15.158 Weighted Round Robin: Not Supported 00:27:15.158 Vendor Specific: Not Supported 00:27:15.158 Reset Timeout: 15000 ms 00:27:15.158 Doorbell Stride: 4 bytes 00:27:15.158 NVM Subsystem Reset: Not Supported 00:27:15.158 Command Sets Supported 00:27:15.158 NVM Command Set: Supported 00:27:15.158 Boot Partition: Not Supported 00:27:15.158 Memory Page Size Minimum: 4096 bytes 00:27:15.158 Memory Page Size Maximum: 4096 bytes 00:27:15.158 Persistent Memory Region: Not Supported 00:27:15.158 Optional Asynchronous Events Supported 00:27:15.158 Namespace Attribute Notices: Supported 00:27:15.158 Firmware Activation Notices: Not Supported 00:27:15.158 ANA Change Notices: Not Supported 00:27:15.158 PLE Aggregate Log Change Notices: Not Supported 00:27:15.158 LBA Status Info Alert Notices: Not Supported 00:27:15.158 EGE Aggregate Log Change Notices: Not Supported 00:27:15.158 Normal NVM Subsystem Shutdown event: Not Supported 00:27:15.158 Zone Descriptor Change Notices: Not Supported 00:27:15.158 Discovery Log Change Notices: Not Supported 00:27:15.158 Controller Attributes 00:27:15.158 128-bit Host Identifier: Supported 00:27:15.158 Non-Operational Permissive Mode: Not Supported 00:27:15.158 NVM Sets: Not Supported 00:27:15.158 Read Recovery Levels: Not Supported 00:27:15.158 Endurance Groups: Not Supported 00:27:15.158 Predictable Latency Mode: Not Supported 00:27:15.158 Traffic Based Keep ALive: Not Supported 00:27:15.158 Namespace Granularity: Not 
Supported 00:27:15.158 SQ Associations: Not Supported 00:27:15.158 UUID List: Not Supported 00:27:15.158 Multi-Domain Subsystem: Not Supported 00:27:15.158 Fixed Capacity Management: Not Supported 00:27:15.158 Variable Capacity Management: Not Supported 00:27:15.158 Delete Endurance Group: Not Supported 00:27:15.158 Delete NVM Set: Not Supported 00:27:15.158 Extended LBA Formats Supported: Not Supported 00:27:15.158 Flexible Data Placement Supported: Not Supported 00:27:15.158 00:27:15.158 Controller Memory Buffer Support 00:27:15.158 ================================ 00:27:15.158 Supported: No 00:27:15.158 00:27:15.158 Persistent Memory Region Support 00:27:15.158 ================================ 00:27:15.158 Supported: No 00:27:15.158 00:27:15.158 Admin Command Set Attributes 00:27:15.158 ============================ 00:27:15.158 Security Send/Receive: Not Supported 00:27:15.158 Format NVM: Not Supported 00:27:15.158 Firmware Activate/Download: Not Supported 00:27:15.158 Namespace Management: Not Supported 00:27:15.158 Device Self-Test: Not Supported 00:27:15.158 Directives: Not Supported 00:27:15.158 NVMe-MI: Not Supported 00:27:15.158 Virtualization Management: Not Supported 00:27:15.158 Doorbell Buffer Config: Not Supported 00:27:15.158 Get LBA Status Capability: Not Supported 00:27:15.158 Command & Feature Lockdown Capability: Not Supported 00:27:15.158 Abort Command Limit: 4 00:27:15.158 Async Event Request Limit: 4 00:27:15.158 Number of Firmware Slots: N/A 00:27:15.158 Firmware Slot 1 Read-Only: N/A 00:27:15.158 Firmware Activation Without Reset: N/A 00:27:15.158 Multiple Update Detection Support: N/A 00:27:15.158 Firmware Update Granularity: No Information Provided 00:27:15.158 Per-Namespace SMART Log: No 00:27:15.158 Asymmetric Namespace Access Log Page: Not Supported 00:27:15.158 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:27:15.158 Command Effects Log Page: Supported 00:27:15.158 Get Log Page Extended Data: Supported 00:27:15.158 Telemetry Log Pages: Not Supported 00:27:15.158 Persistent Event Log Pages: Not Supported 00:27:15.158 Supported Log Pages Log Page: May Support 00:27:15.158 Commands Supported & Effects Log Page: Not Supported 00:27:15.158 Feature Identifiers & Effects Log Page:May Support 00:27:15.158 NVMe-MI Commands & Effects Log Page: May Support 00:27:15.158 Data Area 4 for Telemetry Log: Not Supported 00:27:15.158 Error Log Page Entries Supported: 128 00:27:15.158 Keep Alive: Supported 00:27:15.158 Keep Alive Granularity: 10000 ms 00:27:15.158 00:27:15.158 NVM Command Set Attributes 00:27:15.158 ========================== 00:27:15.158 Submission Queue Entry Size 00:27:15.158 Max: 64 00:27:15.158 Min: 64 00:27:15.158 Completion Queue Entry Size 00:27:15.158 Max: 16 00:27:15.158 Min: 16 00:27:15.158 Number of Namespaces: 32 00:27:15.158 Compare Command: Supported 00:27:15.158 Write Uncorrectable Command: Not Supported 00:27:15.158 Dataset Management Command: Supported 00:27:15.158 Write Zeroes Command: Supported 00:27:15.158 Set Features Save Field: Not Supported 00:27:15.158 Reservations: Not Supported 00:27:15.158 Timestamp: Not Supported 00:27:15.158 Copy: Supported 00:27:15.158 Volatile Write Cache: Present 00:27:15.158 Atomic Write Unit (Normal): 1 00:27:15.158 Atomic Write Unit (PFail): 1 00:27:15.158 Atomic Compare & Write Unit: 1 00:27:15.158 Fused Compare & Write: Supported 00:27:15.158 Scatter-Gather List 00:27:15.158 SGL Command Set: Supported (Dword aligned) 00:27:15.158 SGL Keyed: Not Supported 00:27:15.158 SGL Bit Bucket Descriptor: Not Supported 
00:27:15.158 SGL Metadata Pointer: Not Supported 00:27:15.158 Oversized SGL: Not Supported 00:27:15.158 SGL Metadata Address: Not Supported 00:27:15.158 SGL Offset: Not Supported 00:27:15.158 Transport SGL Data Block: Not Supported 00:27:15.158 Replay Protected Memory Block: Not Supported 00:27:15.158 00:27:15.158 Firmware Slot Information 00:27:15.159 ========================= 00:27:15.159 Active slot: 1 00:27:15.159 Slot 1 Firmware Revision: 24.05.1 00:27:15.159 00:27:15.159 00:27:15.159 Commands Supported and Effects 00:27:15.159 ============================== 00:27:15.159 Admin Commands 00:27:15.159 -------------- 00:27:15.159 Get Log Page (02h): Supported 00:27:15.159 Identify (06h): Supported 00:27:15.159 Abort (08h): Supported 00:27:15.159 Set Features (09h): Supported 00:27:15.159 Get Features (0Ah): Supported 00:27:15.159 Asynchronous Event Request (0Ch): Supported 00:27:15.159 Keep Alive (18h): Supported 00:27:15.159 I/O Commands 00:27:15.159 ------------ 00:27:15.159 Flush (00h): Supported LBA-Change 00:27:15.159 Write (01h): Supported LBA-Change 00:27:15.159 Read (02h): Supported 00:27:15.159 Compare (05h): Supported 00:27:15.159 Write Zeroes (08h): Supported LBA-Change 00:27:15.159 Dataset Management (09h): Supported LBA-Change 00:27:15.159 Copy (19h): Supported LBA-Change 00:27:15.159 Unknown (79h): Supported LBA-Change 00:27:15.159 Unknown (7Ah): Supported 00:27:15.159 00:27:15.159 Error Log 00:27:15.159 ========= 00:27:15.159 00:27:15.159 Arbitration 00:27:15.159 =========== 00:27:15.159 Arbitration Burst: 1 00:27:15.159 00:27:15.159 Power Management 00:27:15.159 ================ 00:27:15.159 Number of Power States: 1 00:27:15.159 Current Power State: Power State #0 00:27:15.159 Power State #0: 00:27:15.159 Max Power: 0.00 W 00:27:15.159 Non-Operational State: Operational 00:27:15.159 Entry Latency: Not Reported 00:27:15.159 Exit Latency: Not Reported 00:27:15.159 Relative Read Throughput: 0 00:27:15.159 Relative Read Latency: 0 00:27:15.159 Relative Write Throughput: 0 00:27:15.159 Relative Write Latency: 0 00:27:15.159 Idle Power: Not Reported 00:27:15.159 Active Power: Not Reported 00:27:15.159 Non-Operational Permissive Mode: Not Supported 00:27:15.159 00:27:15.159 Health Information 00:27:15.159 ================== 00:27:15.159 Critical Warnings: 00:27:15.159 Available Spare Space: OK 00:27:15.159 Temperature: OK 00:27:15.159 Device Reliability: OK 00:27:15.159 Read Only: No 00:27:15.159 Volatile Memory Backup: OK 00:27:15.159 [2024-07-22 16:40:34.695168] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:27:15.159 [2024-07-22 16:40:34.702974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:27:15.159 [2024-07-22 16:40:34.703018] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:27:15.159 [2024-07-22 16:40:34.703036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.159 [2024-07-22 16:40:34.703048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.159 [2024-07-22 16:40:34.703058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.159 [2024-07-22 16:40:34.703068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.159 [2024-07-22 16:40:34.703160] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:27:15.159 [2024-07-22 16:40:34.703181] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:27:15.159 [2024-07-22 16:40:34.704163] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:15.159 [2024-07-22 16:40:34.704235] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:27:15.159 [2024-07-22 16:40:34.704251] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:27:15.159 [2024-07-22 16:40:34.705177] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:27:15.159 [2024-07-22 16:40:34.705202] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:27:15.159 [2024-07-22 16:40:34.705256] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:27:15.159 [2024-07-22 16:40:34.706461] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:27:15.159
Current Temperature: 0 Kelvin (-273 Celsius) 00:27:15.159 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:15.159 Available Spare: 0% 00:27:15.159 Available Spare Threshold: 0% 00:27:15.159 Life Percentage Used: 0% 00:27:15.159 Data Units Read: 0 00:27:15.159 Data Units Written: 0 00:27:15.159 Host Read Commands: 0 00:27:15.159 Host Write Commands: 0 00:27:15.159 Controller Busy Time: 0 minutes 00:27:15.159 Power Cycles: 0 00:27:15.159 Power On Hours: 0 hours 00:27:15.159 Unsafe Shutdowns: 0 00:27:15.159 Unrecoverable Media Errors: 0 00:27:15.159 Lifetime Error Log Entries: 0 00:27:15.159 Warning Temperature Time: 0 minutes 00:27:15.159 Critical Temperature Time: 0 minutes 00:27:15.159 00:27:15.159 Number of Queues 00:27:15.159 ================ 00:27:15.159 Number of I/O Submission Queues: 127 00:27:15.159 Number of I/O Completion Queues: 127 00:27:15.159 00:27:15.159 Active Namespaces 00:27:15.159 ================= 00:27:15.159 Namespace ID:1 00:27:15.159 Error Recovery Timeout: Unlimited 00:27:15.159 Command Set Identifier: NVM (00h) 00:27:15.159 Deallocate: Supported 00:27:15.159 Deallocated/Unwritten Error: Not Supported 00:27:15.159 Deallocated Read Value: Unknown 00:27:15.159 Deallocate in Write Zeroes: Not Supported 00:27:15.159 Deallocated Guard Field: 0xFFFF 00:27:15.159 Flush: Supported 00:27:15.159 Reservation: Supported 00:27:15.159 Namespace Sharing Capabilities: Multiple Controllers 00:27:15.159 Size (in LBAs): 131072 (0GiB) 00:27:15.159 Capacity (in LBAs): 131072 (0GiB) 00:27:15.159 Utilization (in LBAs): 131072 (0GiB) 00:27:15.159 NGUID: D2C0223569BE47C7AE2DB28A9A2D527A 00:27:15.159 UUID: d2c02235-69be-47c7-ae2d-b28a9a2d527a 00:27:15.159 Thin Provisioning: Not Supported 00:27:15.159 Per-NS Atomic Units: Yes 00:27:15.159 Atomic Boundary Size (Normal): 0 00:27:15.159 Atomic Boundary Size (PFail): 0 00:27:15.159 Atomic Boundary Offset: 0 00:27:15.159 Maximum Single Source Range
Length: 65535 00:27:15.159 Maximum Copy Length: 65535 00:27:15.159 Maximum Source Range Count: 1 00:27:15.159 NGUID/EUI64 Never Reused: No 00:27:15.159 Namespace Write Protected: No 00:27:15.159 Number of LBA Formats: 1 00:27:15.159 Current LBA Format: LBA Format #00 00:27:15.159 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:15.159 00:27:15.159 16:40:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:27:15.159 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.417 [2024-07-22 16:40:34.931739] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:20.791 Initializing NVMe Controllers 00:27:20.791 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:20.791 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:27:20.791 Initialization complete. Launching workers. 00:27:20.791 ======================================================== 00:27:20.791 Latency(us) 00:27:20.791 Device Information : IOPS MiB/s Average min max 00:27:20.791 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35438.53 138.43 3611.19 1146.74 8932.57 00:27:20.791 ======================================================== 00:27:20.791 Total : 35438.53 138.43 3611.19 1146.74 8932.57 00:27:20.791 00:27:20.791 [2024-07-22 16:40:40.038381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:20.791 16:40:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:27:20.791 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.791 [2024-07-22 16:40:40.272003] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:26.053 Initializing NVMe Controllers 00:27:26.053 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:26.053 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:27:26.053 Initialization complete. Launching workers. 
00:27:26.053 ======================================================== 00:27:26.053 Latency(us) 00:27:26.053 Device Information : IOPS MiB/s Average min max 00:27:26.053 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33270.58 129.96 3847.32 1181.85 8329.27 00:27:26.053 ======================================================== 00:27:26.053 Total : 33270.58 129.96 3847.32 1181.85 8329.27 00:27:26.053 00:27:26.053 [2024-07-22 16:40:45.298918] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:26.053 16:40:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:27:26.053 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.053 [2024-07-22 16:40:45.519973] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:31.315 [2024-07-22 16:40:50.660134] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:31.315 Initializing NVMe Controllers 00:27:31.315 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:31.315 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:31.315 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:27:31.315 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:27:31.315 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:27:31.315 Initialization complete. Launching workers. 00:27:31.315 Starting thread on core 2 00:27:31.315 Starting thread on core 3 00:27:31.315 Starting thread on core 1 00:27:31.315 16:40:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:27:31.315 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.573 [2024-07-22 16:40:50.973460] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:34.853 [2024-07-22 16:40:54.053468] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:34.853 Initializing NVMe Controllers 00:27:34.853 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:34.853 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:34.853 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:27:34.853 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:27:34.853 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:27:34.853 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:27:34.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:27:34.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:27:34.853 Initialization complete. Launching workers. 
00:27:34.853 Starting thread on core 1 with urgent priority queue 00:27:34.853 Starting thread on core 2 with urgent priority queue 00:27:34.853 Starting thread on core 3 with urgent priority queue 00:27:34.853 Starting thread on core 0 with urgent priority queue 00:27:34.853 SPDK bdev Controller (SPDK2 ) core 0: 5778.67 IO/s 17.31 secs/100000 ios 00:27:34.853 SPDK bdev Controller (SPDK2 ) core 1: 6048.67 IO/s 16.53 secs/100000 ios 00:27:34.853 SPDK bdev Controller (SPDK2 ) core 2: 6173.33 IO/s 16.20 secs/100000 ios 00:27:34.853 SPDK bdev Controller (SPDK2 ) core 3: 6733.33 IO/s 14.85 secs/100000 ios 00:27:34.853 ======================================================== 00:27:34.853 00:27:34.853 16:40:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:27:34.853 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.853 [2024-07-22 16:40:54.363525] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:34.853 Initializing NVMe Controllers 00:27:34.853 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:34.853 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:34.853 Namespace ID: 1 size: 0GB 00:27:34.853 Initialization complete. 00:27:34.853 INFO: using host memory buffer for IO 00:27:34.853 Hello world! 00:27:34.853 [2024-07-22 16:40:54.372737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:34.853 16:40:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:27:34.853 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.111 [2024-07-22 16:40:54.677780] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:36.484 Initializing NVMe Controllers 00:27:36.484 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:36.484 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:36.484 Initialization complete. Launching workers. 
00:27:36.484 submit (in ns) avg, min, max = 5695.8, 3473.3, 4015846.7 00:27:36.484 complete (in ns) avg, min, max = 28268.6, 2043.3, 4017572.2 00:27:36.484 00:27:36.484 Submit histogram 00:27:36.484 ================ 00:27:36.484 Range in us Cumulative Count 00:27:36.484 3.461 - 3.484: 0.0150% ( 2) 00:27:36.484 3.484 - 3.508: 0.8602% ( 113) 00:27:36.484 3.508 - 3.532: 2.2290% ( 183) 00:27:36.484 3.532 - 3.556: 5.8045% ( 478) 00:27:36.484 3.556 - 3.579: 11.8558% ( 809) 00:27:36.484 3.579 - 3.603: 22.2455% ( 1389) 00:27:36.484 3.603 - 3.627: 31.5132% ( 1239) 00:27:36.484 3.627 - 3.650: 39.8085% ( 1109) 00:27:36.484 3.650 - 3.674: 45.7701% ( 797) 00:27:36.484 3.674 - 3.698: 51.8214% ( 809) 00:27:36.484 3.698 - 3.721: 57.7381% ( 791) 00:27:36.484 3.721 - 3.745: 61.6426% ( 522) 00:27:36.484 3.745 - 3.769: 64.5224% ( 385) 00:27:36.484 3.769 - 3.793: 67.3424% ( 377) 00:27:36.484 3.793 - 3.816: 70.7757% ( 459) 00:27:36.484 3.816 - 3.840: 74.8523% ( 545) 00:27:36.484 3.840 - 3.864: 79.2355% ( 586) 00:27:36.484 3.864 - 3.887: 82.5941% ( 449) 00:27:36.484 3.887 - 3.911: 85.2121% ( 350) 00:27:36.484 3.911 - 3.935: 87.4935% ( 305) 00:27:36.484 3.935 - 3.959: 89.1391% ( 220) 00:27:36.484 3.959 - 3.982: 90.2685% ( 151) 00:27:36.484 3.982 - 4.006: 91.2858% ( 136) 00:27:36.484 4.006 - 4.030: 92.2283% ( 126) 00:27:36.484 4.030 - 4.053: 93.1633% ( 125) 00:27:36.484 4.053 - 4.077: 94.0534% ( 119) 00:27:36.484 4.077 - 4.101: 94.7864% ( 98) 00:27:36.484 4.101 - 4.124: 95.3998% ( 82) 00:27:36.484 4.124 - 4.148: 95.8561% ( 61) 00:27:36.484 4.148 - 4.172: 96.1478% ( 39) 00:27:36.484 4.172 - 4.196: 96.4171% ( 36) 00:27:36.484 4.196 - 4.219: 96.6041% ( 25) 00:27:36.484 4.219 - 4.243: 96.7986% ( 26) 00:27:36.484 4.243 - 4.267: 96.9182% ( 16) 00:27:36.484 4.267 - 4.290: 97.0978% ( 24) 00:27:36.484 4.290 - 4.314: 97.2249% ( 17) 00:27:36.484 4.314 - 4.338: 97.3296% ( 14) 00:27:36.484 4.338 - 4.361: 97.4643% ( 18) 00:27:36.484 4.361 - 4.385: 97.5092% ( 6) 00:27:36.484 4.385 - 4.409: 97.5914% ( 11) 00:27:36.484 4.409 - 4.433: 97.6214% ( 4) 00:27:36.484 4.433 - 4.456: 97.6438% ( 3) 00:27:36.484 4.456 - 4.480: 97.6513% ( 1) 00:27:36.484 4.480 - 4.504: 97.6588% ( 1) 00:27:36.484 4.504 - 4.527: 97.6737% ( 2) 00:27:36.484 4.527 - 4.551: 97.6812% ( 1) 00:27:36.484 4.575 - 4.599: 97.6887% ( 1) 00:27:36.484 4.599 - 4.622: 97.6962% ( 1) 00:27:36.484 4.622 - 4.646: 97.7111% ( 2) 00:27:36.484 4.646 - 4.670: 97.7261% ( 2) 00:27:36.484 4.741 - 4.764: 97.7410% ( 2) 00:27:36.484 4.788 - 4.812: 97.7485% ( 1) 00:27:36.484 4.812 - 4.836: 97.7635% ( 2) 00:27:36.484 4.836 - 4.859: 97.7710% ( 1) 00:27:36.484 4.859 - 4.883: 97.8084% ( 5) 00:27:36.484 4.883 - 4.907: 97.8832% ( 10) 00:27:36.484 4.907 - 4.930: 97.9729% ( 12) 00:27:36.484 4.930 - 4.954: 98.0328% ( 8) 00:27:36.484 4.954 - 4.978: 98.0552% ( 3) 00:27:36.484 4.978 - 5.001: 98.1076% ( 7) 00:27:36.484 5.001 - 5.025: 98.1973% ( 12) 00:27:36.484 5.025 - 5.049: 98.2347% ( 5) 00:27:36.484 5.049 - 5.073: 98.3020% ( 9) 00:27:36.484 5.073 - 5.096: 98.3469% ( 6) 00:27:36.484 5.096 - 5.120: 98.3843% ( 5) 00:27:36.484 5.120 - 5.144: 98.4068% ( 3) 00:27:36.484 5.144 - 5.167: 98.4292% ( 3) 00:27:36.484 5.167 - 5.191: 98.4591% ( 4) 00:27:36.484 5.191 - 5.215: 98.4741% ( 2) 00:27:36.484 5.215 - 5.239: 98.4965% ( 3) 00:27:36.484 5.239 - 5.262: 98.5414% ( 6) 00:27:36.484 5.262 - 5.286: 98.5489% ( 1) 00:27:36.484 5.310 - 5.333: 98.5638% ( 2) 00:27:36.484 5.333 - 5.357: 98.5788% ( 2) 00:27:36.484 5.357 - 5.381: 98.5938% ( 2) 00:27:36.484 5.428 - 5.452: 98.6012% ( 1) 00:27:36.484 5.452 - 5.476: 98.6087% ( 
1) 00:27:36.484 5.476 - 5.499: 98.6162% ( 1) 00:27:36.484 5.523 - 5.547: 98.6237% ( 1) 00:27:36.484 5.570 - 5.594: 98.6312% ( 1) 00:27:36.484 5.618 - 5.641: 98.6386% ( 1) 00:27:36.484 5.713 - 5.736: 98.6461% ( 1) 00:27:36.484 5.855 - 5.879: 98.6536% ( 1) 00:27:36.484 5.926 - 5.950: 98.6611% ( 1) 00:27:36.484 6.068 - 6.116: 98.6686% ( 1) 00:27:36.484 6.116 - 6.163: 98.6760% ( 1) 00:27:36.484 6.258 - 6.305: 98.6835% ( 1) 00:27:36.484 6.353 - 6.400: 98.6910% ( 1) 00:27:36.484 6.637 - 6.684: 98.6985% ( 1) 00:27:36.484 7.206 - 7.253: 98.7060% ( 1) 00:27:36.484 7.348 - 7.396: 98.7134% ( 1) 00:27:36.484 7.490 - 7.538: 98.7284% ( 2) 00:27:36.484 7.822 - 7.870: 98.7359% ( 1) 00:27:36.484 7.870 - 7.917: 98.7508% ( 2) 00:27:36.484 8.059 - 8.107: 98.7583% ( 1) 00:27:36.484 8.107 - 8.154: 98.7733% ( 2) 00:27:36.484 8.201 - 8.249: 98.7808% ( 1) 00:27:36.484 8.296 - 8.344: 98.7882% ( 1) 00:27:36.484 8.344 - 8.391: 98.8032% ( 2) 00:27:36.484 8.533 - 8.581: 98.8182% ( 2) 00:27:36.484 8.628 - 8.676: 98.8256% ( 1) 00:27:36.484 8.865 - 8.913: 98.8331% ( 1) 00:27:36.484 8.960 - 9.007: 98.8406% ( 1) 00:27:36.484 9.007 - 9.055: 98.8481% ( 1) 00:27:36.484 9.055 - 9.102: 98.8705% ( 3) 00:27:36.484 9.197 - 9.244: 98.8780% ( 1) 00:27:36.484 9.339 - 9.387: 98.8930% ( 2) 00:27:36.484 9.481 - 9.529: 98.9004% ( 1) 00:27:36.484 9.671 - 9.719: 98.9154% ( 2) 00:27:36.484 10.003 - 10.050: 98.9229% ( 1) 00:27:36.484 10.382 - 10.430: 98.9304% ( 1) 00:27:36.484 10.430 - 10.477: 98.9378% ( 1) 00:27:36.484 10.761 - 10.809: 98.9453% ( 1) 00:27:36.484 10.856 - 10.904: 98.9528% ( 1) 00:27:36.484 11.093 - 11.141: 98.9603% ( 1) 00:27:36.484 11.141 - 11.188: 98.9678% ( 1) 00:27:36.485 11.473 - 11.520: 98.9752% ( 1) 00:27:36.485 11.710 - 11.757: 98.9827% ( 1) 00:27:36.485 11.804 - 11.852: 98.9902% ( 1) 00:27:36.485 11.994 - 12.041: 98.9977% ( 1) 00:27:36.485 12.326 - 12.421: 99.0052% ( 1) 00:27:36.485 12.895 - 12.990: 99.0126% ( 1) 00:27:36.485 12.990 - 13.084: 99.0201% ( 1) 00:27:36.485 13.274 - 13.369: 99.0276% ( 1) 00:27:36.485 13.653 - 13.748: 99.0351% ( 1) 00:27:36.485 13.748 - 13.843: 99.0426% ( 1) 00:27:36.485 13.938 - 14.033: 99.0575% ( 2) 00:27:36.485 14.412 - 14.507: 99.0650% ( 1) 00:27:36.485 14.886 - 14.981: 99.0725% ( 1) 00:27:36.485 17.067 - 17.161: 99.0800% ( 1) 00:27:36.485 17.161 - 17.256: 99.1174% ( 5) 00:27:36.485 17.256 - 17.351: 99.1248% ( 1) 00:27:36.485 17.351 - 17.446: 99.1548% ( 4) 00:27:36.485 17.446 - 17.541: 99.1622% ( 1) 00:27:36.485 17.541 - 17.636: 99.1697% ( 1) 00:27:36.485 17.636 - 17.730: 99.2071% ( 5) 00:27:36.485 17.730 - 17.825: 99.2445% ( 5) 00:27:36.485 17.825 - 17.920: 99.3193% ( 10) 00:27:36.485 17.920 - 18.015: 99.4016% ( 11) 00:27:36.485 18.015 - 18.110: 99.4166% ( 2) 00:27:36.485 18.110 - 18.204: 99.4689% ( 7) 00:27:36.485 18.204 - 18.299: 99.5587% ( 12) 00:27:36.485 18.299 - 18.394: 99.6260% ( 9) 00:27:36.485 18.394 - 18.489: 99.7083% ( 11) 00:27:36.485 18.489 - 18.584: 99.7307% ( 3) 00:27:36.485 18.584 - 18.679: 99.7681% ( 5) 00:27:36.485 18.679 - 18.773: 99.7906% ( 3) 00:27:36.485 18.773 - 18.868: 99.8280% ( 5) 00:27:36.485 18.868 - 18.963: 99.8504% ( 3) 00:27:36.485 18.963 - 19.058: 99.8579% ( 1) 00:27:36.485 19.058 - 19.153: 99.8654% ( 1) 00:27:36.485 19.627 - 19.721: 99.8728% ( 1) 00:27:36.485 19.721 - 19.816: 99.8803% ( 1) 00:27:36.485 20.290 - 20.385: 99.8878% ( 1) 00:27:36.485 20.575 - 20.670: 99.8953% ( 1) 00:27:36.485 21.618 - 21.713: 99.9028% ( 1) 00:27:36.485 22.756 - 22.850: 99.9102% ( 1) 00:27:36.485 23.135 - 23.230: 99.9177% ( 1) 00:27:36.485 23.514 - 23.609: 99.9252% ( 1) 
00:27:36.485 23.704 - 23.799: 99.9327% ( 1) 00:27:36.485 27.686 - 27.876: 99.9402% ( 1) 00:27:36.485 28.824 - 29.013: 99.9476% ( 1) 00:27:36.485 35.840 - 36.030: 99.9551% ( 1) 00:27:36.485 3980.705 - 4004.978: 99.9850% ( 4) 00:27:36.485 4004.978 - 4029.250: 100.0000% ( 2) 00:27:36.485 00:27:36.485 Complete histogram 00:27:36.485 ================== 00:27:36.485 Range in us Cumulative Count 00:27:36.485 2.039 - 2.050: 1.3688% ( 183) 00:27:36.485 2.050 - 2.062: 26.4118% ( 3348) 00:27:36.485 2.062 - 2.074: 36.2106% ( 1310) 00:27:36.485 2.074 - 2.086: 40.9978% ( 640) 00:27:36.485 2.086 - 2.098: 55.4492% ( 1932) 00:27:36.485 2.098 - 2.110: 59.9147% ( 597) 00:27:36.485 2.110 - 2.121: 64.7169% ( 642) 00:27:36.485 2.121 - 2.133: 72.1071% ( 988) 00:27:36.485 2.133 - 2.145: 73.6929% ( 212) 00:27:36.485 2.145 - 2.157: 76.5876% ( 387) 00:27:36.485 2.157 - 2.169: 80.3949% ( 509) 00:27:36.485 2.169 - 2.181: 81.3973% ( 134) 00:27:36.485 2.181 - 2.193: 83.1401% ( 233) 00:27:36.485 2.193 - 2.204: 86.6333% ( 467) 00:27:36.485 2.204 - 2.216: 88.8324% ( 294) 00:27:36.485 2.216 - 2.228: 90.9043% ( 277) 00:27:36.485 2.228 - 2.240: 92.7594% ( 248) 00:27:36.485 2.240 - 2.252: 93.3727% ( 82) 00:27:36.485 2.252 - 2.264: 93.7467% ( 50) 00:27:36.485 2.264 - 2.276: 94.0609% ( 42) 00:27:36.485 2.276 - 2.287: 94.8164% ( 101) 00:27:36.485 2.287 - 2.299: 95.1679% ( 47) 00:27:36.485 2.299 - 2.311: 95.2801% ( 15) 00:27:36.485 2.311 - 2.323: 95.3624% ( 11) 00:27:36.485 2.323 - 2.335: 95.3848% ( 3) 00:27:36.485 2.335 - 2.347: 95.4522% ( 9) 00:27:36.485 2.347 - 2.359: 95.5943% ( 19) 00:27:36.485 2.359 - 2.370: 95.9084% ( 42) 00:27:36.485 2.370 - 2.382: 96.1029% ( 26) 00:27:36.485 2.382 - 2.394: 96.2899% ( 25) 00:27:36.485 2.394 - 2.406: 96.5517% ( 35) 00:27:36.485 2.406 - 2.418: 96.7911% ( 32) 00:27:36.485 2.418 - 2.430: 97.0753% ( 38) 00:27:36.485 2.430 - 2.441: 97.3147% ( 32) 00:27:36.485 2.441 - 2.453: 97.4269% ( 15) 00:27:36.485 2.453 - 2.465: 97.5391% ( 15) 00:27:36.485 2.465 - 2.477: 97.6438% ( 14) 00:27:36.485 2.477 - 2.489: 97.7410% ( 13) 00:27:36.485 2.489 - 2.501: 97.8532% ( 15) 00:27:36.485 2.501 - 2.513: 97.9580% ( 14) 00:27:36.485 2.513 - 2.524: 98.0477% ( 12) 00:27:36.485 2.524 - 2.536: 98.1001% ( 7) 00:27:36.485 2.536 - 2.548: 98.1524% ( 7) 00:27:36.485 2.548 - 2.560: 98.1599% ( 1) 00:27:36.485 2.560 - 2.572: 98.1824% ( 3) 00:27:36.485 2.572 - 2.584: 98.1973% ( 2) 00:27:36.485 2.584 - 2.596: 98.2123% ( 2) 00:27:36.485 2.607 - 2.619: 98.2198% ( 1) 00:27:36.485 2.619 - 2.631: 98.2272% ( 1) 00:27:36.485 2.631 - 2.643: 98.2347% ( 1) 00:27:36.485 2.667 - 2.679: 98.2497% ( 2) 00:27:36.485 2.738 - 2.750: 98.2572% ( 1) 00:27:36.485 2.750 - 2.761: 98.2646% ( 1) 00:27:36.485 2.773 - 2.785: 98.2721% ( 1) 00:27:36.485 2.797 - 2.809: 98.2796% ( 1) 00:27:36.485 2.809 - 2.821: 98.2871% ( 1) 00:27:36.485 2.844 - 2.856: 98.2946% ( 1) 00:27:36.485 2.904 - 2.916: 98.3020% ( 1) 00:27:36.485 2.916 - 2.927: 98.3095% ( 1) 00:27:36.485 2.939 - 2.951: 98.3170% ( 1) 00:27:36.485 3.034 - 3.058: 98.3245% ( 1) 00:27:36.485 3.224 - 3.247: 98.3320% ( 1) 00:27:36.485 3.413 - 3.437: 98.3394% ( 1) 00:27:36.485 3.461 - 3.484: 98.3469% ( 1) 00:27:36.485 3.484 - 3.508: 98.3544% ( 1) 00:27:36.485 3.508 - 3.532: 98.3619% ( 1) 00:27:36.485 3.532 - 3.556: 98.3694% ( 1) 00:27:36.485 3.556 - 3.579: 98.3768% ( 1) 00:27:36.485 3.579 - 3.603: 98.3843% ( 1) 00:27:36.485 3.674 - 3.698: 98.3918% ( 1) 00:27:36.485 3.721 - 3.745: 98.4142% ( 3) 00:27:36.485 3.745 - 3.769: 98.4217% ( 1) 00:27:36.485 3.769 - 3.793: 98.4292% ( 1) 00:27:36.485 3.816 - 3.840: 98.4367% 
( 1) 00:27:36.485 3.864 - 3.887: 98.4516% ( 2) 00:27:36.485 3.911 - 3.935: 98.4591% ( 1) 00:27:36.485 3.959 - 3.982: 98.4666% ( 1) 00:27:36.485 3.982 - 4.006: 98.4816% ( 2) 00:27:36.485 4.077 - 4.101: 98.4890% ( 1) 00:27:36.485 4.172 - 4.196: 98.4965% ( 1) 00:27:36.485 4.196 - 4.219: 98.5040% ( 1) 00:27:36.485 4.267 - 4.290: 98.5115% ( 1) 00:27:36.485 5.784 - 5.807: 98.5339% ( 3) 00:27:36.485 5.973 - 5.997: 98.5414% ( 1) 00:27:36.485 6.163 - 6.210: 98.5638% ( 3) 00:27:36.485 6.210 - 6.258: 98.5713% ( 1) 00:27:36.485 6.305 - 6.353: 9[2024-07-22 16:40:55.772833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:36.485 8.5788% ( 1) 00:27:36.485 6.400 - 6.447: 98.5863% ( 1) 00:27:36.485 6.542 - 6.590: 98.5938% ( 1) 00:27:36.485 6.590 - 6.637: 98.6012% ( 1) 00:27:36.485 6.732 - 6.779: 98.6162% ( 2) 00:27:36.485 6.779 - 6.827: 98.6312% ( 2) 00:27:36.485 7.301 - 7.348: 98.6386% ( 1) 00:27:36.485 7.443 - 7.490: 98.6461% ( 1) 00:27:36.485 7.538 - 7.585: 98.6536% ( 1) 00:27:36.485 7.680 - 7.727: 98.6611% ( 1) 00:27:36.485 9.007 - 9.055: 98.6686% ( 1) 00:27:36.485 9.529 - 9.576: 98.6760% ( 1) 00:27:36.485 9.766 - 9.813: 98.6835% ( 1) 00:27:36.485 13.084 - 13.179: 98.6910% ( 1) 00:27:36.485 15.455 - 15.550: 98.6985% ( 1) 00:27:36.485 15.550 - 15.644: 98.7209% ( 3) 00:27:36.485 15.644 - 15.739: 98.7508% ( 4) 00:27:36.485 15.739 - 15.834: 98.7808% ( 4) 00:27:36.485 15.834 - 15.929: 98.8032% ( 3) 00:27:36.485 15.929 - 16.024: 98.8331% ( 4) 00:27:36.485 16.024 - 16.119: 98.8406% ( 1) 00:27:36.485 16.119 - 16.213: 98.8780% ( 5) 00:27:36.485 16.213 - 16.308: 98.9004% ( 3) 00:27:36.485 16.308 - 16.403: 98.9304% ( 4) 00:27:36.485 16.403 - 16.498: 98.9902% ( 8) 00:27:36.485 16.498 - 16.593: 99.0052% ( 2) 00:27:36.485 16.593 - 16.687: 99.0426% ( 5) 00:27:36.485 16.687 - 16.782: 99.1024% ( 8) 00:27:36.485 16.782 - 16.877: 99.1622% ( 8) 00:27:36.485 16.877 - 16.972: 99.1922% ( 4) 00:27:36.485 16.972 - 17.067: 99.1996% ( 1) 00:27:36.485 17.067 - 17.161: 99.2146% ( 2) 00:27:36.485 17.161 - 17.256: 99.2221% ( 1) 00:27:36.485 17.256 - 17.351: 99.2595% ( 5) 00:27:36.485 17.351 - 17.446: 99.2744% ( 2) 00:27:36.485 17.730 - 17.825: 99.2819% ( 1) 00:27:36.485 17.825 - 17.920: 99.3044% ( 3) 00:27:36.485 18.110 - 18.204: 99.3193% ( 2) 00:27:36.485 18.394 - 18.489: 99.3268% ( 1) 00:27:36.485 18.679 - 18.773: 99.3418% ( 2) 00:27:36.485 20.385 - 20.480: 99.3492% ( 1) 00:27:36.485 3980.705 - 4004.978: 99.8354% ( 65) 00:27:36.485 4004.978 - 4029.250: 100.0000% ( 22) 00:27:36.485 00:27:36.485 16:40:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:27:36.486 16:40:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:27:36.486 16:40:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:27:36.486 16:40:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:27:36.486 16:40:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:36.486 [ 00:27:36.486 { 00:27:36.486 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:36.486 "subtype": "Discovery", 00:27:36.486 "listen_addresses": [], 00:27:36.486 "allow_any_host": true, 00:27:36.486 "hosts": [] 00:27:36.486 }, 00:27:36.486 { 00:27:36.486 "nqn": "nqn.2019-07.io.spdk:cnode1", 
00:27:36.486 "subtype": "NVMe", 00:27:36.486 "listen_addresses": [ 00:27:36.486 { 00:27:36.486 "trtype": "VFIOUSER", 00:27:36.486 "adrfam": "IPv4", 00:27:36.486 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:36.486 "trsvcid": "0" 00:27:36.486 } 00:27:36.486 ], 00:27:36.486 "allow_any_host": true, 00:27:36.486 "hosts": [], 00:27:36.486 "serial_number": "SPDK1", 00:27:36.486 "model_number": "SPDK bdev Controller", 00:27:36.486 "max_namespaces": 32, 00:27:36.486 "min_cntlid": 1, 00:27:36.486 "max_cntlid": 65519, 00:27:36.486 "namespaces": [ 00:27:36.486 { 00:27:36.486 "nsid": 1, 00:27:36.486 "bdev_name": "Malloc1", 00:27:36.486 "name": "Malloc1", 00:27:36.486 "nguid": "907079F8B6B44FE09758FC013C53ECD3", 00:27:36.486 "uuid": "907079f8-b6b4-4fe0-9758-fc013c53ecd3" 00:27:36.486 }, 00:27:36.486 { 00:27:36.486 "nsid": 2, 00:27:36.486 "bdev_name": "Malloc3", 00:27:36.486 "name": "Malloc3", 00:27:36.486 "nguid": "29CA82E8399645349E807949CC109DCD", 00:27:36.486 "uuid": "29ca82e8-3996-4534-9e80-7949cc109dcd" 00:27:36.486 } 00:27:36.486 ] 00:27:36.486 }, 00:27:36.486 { 00:27:36.486 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:36.486 "subtype": "NVMe", 00:27:36.486 "listen_addresses": [ 00:27:36.486 { 00:27:36.486 "trtype": "VFIOUSER", 00:27:36.486 "adrfam": "IPv4", 00:27:36.486 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:36.486 "trsvcid": "0" 00:27:36.486 } 00:27:36.486 ], 00:27:36.486 "allow_any_host": true, 00:27:36.486 "hosts": [], 00:27:36.486 "serial_number": "SPDK2", 00:27:36.486 "model_number": "SPDK bdev Controller", 00:27:36.486 "max_namespaces": 32, 00:27:36.486 "min_cntlid": 1, 00:27:36.486 "max_cntlid": 65519, 00:27:36.486 "namespaces": [ 00:27:36.486 { 00:27:36.486 "nsid": 1, 00:27:36.486 "bdev_name": "Malloc2", 00:27:36.486 "name": "Malloc2", 00:27:36.486 "nguid": "D2C0223569BE47C7AE2DB28A9A2D527A", 00:27:36.486 "uuid": "d2c02235-69be-47c7-ae2d-b28a9a2d527a" 00:27:36.486 } 00:27:36.486 ] 00:27:36.486 } 00:27:36.486 ] 00:27:36.486 16:40:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:36.486 16:40:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2760548 00:27:36.486 16:40:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:27:36.486 16:40:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:27:36.486 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:27:36.486 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:36.486 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:36.486 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:27:36.486 16:40:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:27:36.486 16:40:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:27:36.743 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.743 [2024-07-22 16:40:56.272471] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:36.743 Malloc4 00:27:37.001 16:40:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:27:37.258 [2024-07-22 16:40:56.666486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:37.258 16:40:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:37.258 Asynchronous Event Request test 00:27:37.258 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:37.258 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:37.258 Registering asynchronous event callbacks... 00:27:37.258 Starting namespace attribute notice tests for all controllers... 00:27:37.258 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:37.258 aer_cb - Changed Namespace 00:27:37.258 Cleaning up... 00:27:37.515 [ 00:27:37.515 { 00:27:37.515 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:37.515 "subtype": "Discovery", 00:27:37.515 "listen_addresses": [], 00:27:37.515 "allow_any_host": true, 00:27:37.515 "hosts": [] 00:27:37.515 }, 00:27:37.515 { 00:27:37.515 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:27:37.515 "subtype": "NVMe", 00:27:37.515 "listen_addresses": [ 00:27:37.515 { 00:27:37.515 "trtype": "VFIOUSER", 00:27:37.515 "adrfam": "IPv4", 00:27:37.515 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:37.515 "trsvcid": "0" 00:27:37.515 } 00:27:37.515 ], 00:27:37.515 "allow_any_host": true, 00:27:37.515 "hosts": [], 00:27:37.515 "serial_number": "SPDK1", 00:27:37.515 "model_number": "SPDK bdev Controller", 00:27:37.515 "max_namespaces": 32, 00:27:37.515 "min_cntlid": 1, 00:27:37.515 "max_cntlid": 65519, 00:27:37.515 "namespaces": [ 00:27:37.515 { 00:27:37.515 "nsid": 1, 00:27:37.515 "bdev_name": "Malloc1", 00:27:37.515 "name": "Malloc1", 00:27:37.515 "nguid": "907079F8B6B44FE09758FC013C53ECD3", 00:27:37.515 "uuid": "907079f8-b6b4-4fe0-9758-fc013c53ecd3" 00:27:37.515 }, 00:27:37.515 { 00:27:37.515 "nsid": 2, 00:27:37.515 "bdev_name": "Malloc3", 00:27:37.515 "name": "Malloc3", 00:27:37.515 "nguid": "29CA82E8399645349E807949CC109DCD", 00:27:37.515 "uuid": "29ca82e8-3996-4534-9e80-7949cc109dcd" 00:27:37.515 } 00:27:37.515 ] 00:27:37.515 }, 00:27:37.515 { 00:27:37.515 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:37.515 "subtype": "NVMe", 00:27:37.515 "listen_addresses": [ 00:27:37.515 { 00:27:37.515 "trtype": "VFIOUSER", 00:27:37.515 "adrfam": "IPv4", 00:27:37.515 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:37.515 "trsvcid": "0" 00:27:37.515 } 00:27:37.515 ], 00:27:37.515 "allow_any_host": true, 00:27:37.515 "hosts": [], 00:27:37.515 "serial_number": "SPDK2", 00:27:37.515 "model_number": "SPDK bdev Controller", 00:27:37.515 
"max_namespaces": 32, 00:27:37.515 "min_cntlid": 1, 00:27:37.515 "max_cntlid": 65519, 00:27:37.515 "namespaces": [ 00:27:37.515 { 00:27:37.515 "nsid": 1, 00:27:37.515 "bdev_name": "Malloc2", 00:27:37.515 "name": "Malloc2", 00:27:37.515 "nguid": "D2C0223569BE47C7AE2DB28A9A2D527A", 00:27:37.515 "uuid": "d2c02235-69be-47c7-ae2d-b28a9a2d527a" 00:27:37.515 }, 00:27:37.515 { 00:27:37.515 "nsid": 2, 00:27:37.515 "bdev_name": "Malloc4", 00:27:37.515 "name": "Malloc4", 00:27:37.515 "nguid": "ABACA86ABA5B4A929903AD49FD760043", 00:27:37.515 "uuid": "abaca86a-ba5b-4a92-9903-ad49fd760043" 00:27:37.515 } 00:27:37.515 ] 00:27:37.515 } 00:27:37.515 ] 00:27:37.515 16:40:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2760548 00:27:37.515 16:40:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:27:37.515 16:40:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2755054 00:27:37.515 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2755054 ']' 00:27:37.516 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2755054 00:27:37.516 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:27:37.516 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:37.516 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2755054 00:27:37.516 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:37.516 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:37.516 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2755054' 00:27:37.516 killing process with pid 2755054 00:27:37.516 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2755054 00:27:37.516 16:40:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2755054 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2760692 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2760692' 00:27:37.774 Process pid: 2760692 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2760692 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2760692 ']' 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.774 16:40:57 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:37.774 16:40:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:27:37.774 [2024-07-22 16:40:57.330761] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:37.774 [2024-07-22 16:40:57.331813] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:37.774 [2024-07-22 16:40:57.331874] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.774 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.774 [2024-07-22 16:40:57.405997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:38.032 [2024-07-22 16:40:57.498395] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.032 [2024-07-22 16:40:57.498454] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.032 [2024-07-22 16:40:57.498470] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.032 [2024-07-22 16:40:57.498484] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.032 [2024-07-22 16:40:57.498496] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.032 [2024-07-22 16:40:57.498576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.032 [2024-07-22 16:40:57.498643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.032 [2024-07-22 16:40:57.498734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.032 [2024-07-22 16:40:57.498736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.032 [2024-07-22 16:40:57.605633] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:38.032 [2024-07-22 16:40:57.605848] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:38.032 [2024-07-22 16:40:57.606169] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:38.032 [2024-07-22 16:40:57.606784] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:38.032 [2024-07-22 16:40:57.607059] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
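Condensed from the rpc.py trace that follows (which interleaves the calls with shell xtrace output), the interrupt-mode pass stands up the VFIOUSER transport once and then repeats one controller's worth of setup per device. A minimal sketch, shown for device 1 only; $SPDK abbreviates the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk prefix and is not how the trace itself spells the paths:

    # transport_args for this pass are '-M -I', passed through from the script as-is
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MB malloc bdev, 512-byte blocks
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    # traddr is the directory that will hold the vfio-user control socket ('cntrl');
    # -s 0 is the trsvcid placeholder seen in the subsystem JSON earlier in this log
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0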
00:27:38.032 16:40:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:38.032 16:40:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:27:38.032 16:40:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:27:39.403 16:40:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:27:39.403 16:40:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:27:39.403 16:40:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:27:39.403 16:40:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:27:39.403 16:40:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:27:39.403 16:40:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:39.662 Malloc1 00:27:39.662 16:40:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:27:39.920 16:40:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:27:40.177 16:40:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:27:40.435 16:41:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:27:40.435 16:41:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:27:40.435 16:41:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:27:40.692 Malloc2 00:27:40.949 16:41:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:27:40.949 16:41:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:27:41.513 16:41:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:27:41.771 16:41:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:27:41.771 16:41:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2760692 00:27:41.771 16:41:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2760692 ']' 00:27:41.771 16:41:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2760692 00:27:41.771 16:41:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:27:41.771 16:41:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:41.771 16:41:01 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2760692 00:27:41.771 16:41:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:41.771 16:41:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:41.771 16:41:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2760692' 00:27:41.771 killing process with pid 2760692 00:27:41.771 16:41:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2760692 00:27:41.771 16:41:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2760692 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:27:42.029 00:27:42.029 real 0m52.772s 00:27:42.029 user 3m28.340s 00:27:42.029 sys 0m4.483s 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:27:42.029 ************************************ 00:27:42.029 END TEST nvmf_vfio_user 00:27:42.029 ************************************ 00:27:42.029 16:41:01 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:27:42.029 16:41:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:42.029 16:41:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:42.029 16:41:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:42.029 ************************************ 00:27:42.029 START TEST nvmf_vfio_user_nvme_compliance 00:27:42.029 ************************************ 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:27:42.029 * Looking for test storage... 
00:27:42.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.029 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2761288 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2761288' 00:27:42.030 Process pid: 2761288 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2761288 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 2761288 ']' 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:42.030 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:42.030 [2024-07-22 16:41:01.610685] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:42.030 [2024-07-22 16:41:01.610782] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.030 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.030 [2024-07-22 16:41:01.679141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:42.288 [2024-07-22 16:41:01.762832] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.288 [2024-07-22 16:41:01.762884] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.288 [2024-07-22 16:41:01.762912] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.288 [2024-07-22 16:41:01.762923] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.288 [2024-07-22 16:41:01.762933] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
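The compliance pass repeats the same target-side sequence with a single malloc0 namespace under nqn.2021-09.io.spdk:cnode0, listening at /var/run/vfio-user (issued through rpc_cmd, the suite's wrapper around scripts/rpc.py). On the initiator side, every tool exercised in this log — spdk_nvme_perf, reconnect, arbitration, hello_world, overhead, aer, nvme_compliance — addresses the target the same way: an SPDK transport-ID string rather than an IP and port. A minimal sketch using the exact strings from the trace below; $SPDK again abbreviates the workspace spdk prefix:

    # traddr names the directory holding the vfio-user socket registered by
    # nvmf_subsystem_add_listener; no network endpoint is involved
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
    $SPDK/test/nvme/compliance/nvme_compliance -g -r "$TRID"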
00:27:42.288 [2024-07-22 16:41:01.763032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.288 [2024-07-22 16:41:01.763091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.288 [2024-07-22 16:41:01.763095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.288 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:42.288 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:27:42.288 16:41:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:43.658 malloc0 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:43.658 16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.658 
16:41:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:27:43.658 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.658 00:27:43.658 00:27:43.658 CUnit - A unit testing framework for C - Version 2.1-3 00:27:43.658 http://cunit.sourceforge.net/ 00:27:43.658 00:27:43.658 00:27:43.658 Suite: nvme_compliance 00:27:43.658 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-22 16:41:03.114497] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:43.658 [2024-07-22 16:41:03.115921] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:27:43.658 [2024-07-22 16:41:03.115960] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:27:43.658 [2024-07-22 16:41:03.115984] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:27:43.658 [2024-07-22 16:41:03.117518] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:43.658 passed 00:27:43.658 Test: admin_identify_ctrlr_verify_fused ...[2024-07-22 16:41:03.203093] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:43.658 [2024-07-22 16:41:03.206112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:43.658 passed 00:27:43.658 Test: admin_identify_ns ...[2024-07-22 16:41:03.293483] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:43.915 [2024-07-22 16:41:03.352996] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:27:43.915 [2024-07-22 16:41:03.360998] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:27:43.915 [2024-07-22 16:41:03.382122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:43.915 passed 00:27:43.915 Test: admin_get_features_mandatory_features ...[2024-07-22 16:41:03.465772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:43.915 [2024-07-22 16:41:03.468791] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:43.915 passed 00:27:43.915 Test: admin_get_features_optional_features ...[2024-07-22 16:41:03.553373] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:43.915 [2024-07-22 16:41:03.556393] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:44.172 passed 00:27:44.172 Test: admin_set_features_number_of_queues ...[2024-07-22 16:41:03.641614] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:44.172 [2024-07-22 16:41:03.746077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:44.172 passed 00:27:44.429 Test: admin_get_log_page_mandatory_logs ...[2024-07-22 16:41:03.829655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:44.429 [2024-07-22 16:41:03.832677] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:44.429 passed 00:27:44.429 Test: admin_get_log_page_with_lpo ...[2024-07-22 16:41:03.913051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:44.429 [2024-07-22 16:41:03.982982] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:27:44.429 [2024-07-22 16:41:03.996086] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:44.429 passed 00:27:44.429 Test: fabric_property_get ...[2024-07-22 16:41:04.079678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:44.686 [2024-07-22 16:41:04.080959] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:27:44.686 [2024-07-22 16:41:04.082689] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:44.686 passed 00:27:44.686 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-22 16:41:04.164232] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:44.686 [2024-07-22 16:41:04.165561] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:27:44.686 [2024-07-22 16:41:04.169281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:44.686 passed 00:27:44.686 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-22 16:41:04.250437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:44.943 [2024-07-22 16:41:04.338007] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:27:44.943 [2024-07-22 16:41:04.353989] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:27:44.943 [2024-07-22 16:41:04.359109] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:44.943 passed 00:27:44.943 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-22 16:41:04.438674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:44.943 [2024-07-22 16:41:04.439936] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:27:44.943 [2024-07-22 16:41:04.441695] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:44.943 passed 00:27:44.943 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-22 16:41:04.528552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:45.200 [2024-07-22 16:41:04.604977] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:27:45.200 [2024-07-22 16:41:04.629004] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:27:45.200 [2024-07-22 16:41:04.634092] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:45.200 passed 00:27:45.200 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-22 16:41:04.714802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:45.200 [2024-07-22 16:41:04.716124] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:27:45.200 [2024-07-22 16:41:04.716166] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:27:45.200 [2024-07-22 16:41:04.719838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:45.200 passed 00:27:45.200 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-22 16:41:04.801129] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:45.457 [2024-07-22 16:41:04.896978] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:27:45.457 [2024-07-22 16:41:04.904977] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:27:45.457 [2024-07-22 16:41:04.912978] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:27:45.457 [2024-07-22 16:41:04.920979] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:27:45.457 [2024-07-22 16:41:04.950082] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:45.457 passed 00:27:45.457 Test: admin_create_io_sq_verify_pc ...[2024-07-22 16:41:05.030714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:45.457 [2024-07-22 16:41:05.045987] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:27:45.457 [2024-07-22 16:41:05.063141] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:45.457 passed 00:27:45.714 Test: admin_create_io_qp_max_qps ...[2024-07-22 16:41:05.150748] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:46.647 [2024-07-22 16:41:06.254984] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:27:47.211 [2024-07-22 16:41:06.634754] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:47.211 passed 00:27:47.211 Test: admin_create_io_sq_shared_cq ...[2024-07-22 16:41:06.718148] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:47.211 [2024-07-22 16:41:06.849990] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:27:47.469 [2024-07-22 16:41:06.887062] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:47.469 passed 00:27:47.469 00:27:47.469 Run Summary: Type Total Ran Passed Failed Inactive 00:27:47.469 suites 1 1 n/a 0 0 00:27:47.469 tests 18 18 18 0 0 00:27:47.469 asserts 360 360 360 0 n/a 00:27:47.469 00:27:47.469 Elapsed time = 1.564 seconds 00:27:47.469 16:41:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2761288 00:27:47.469 16:41:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 2761288 ']' 00:27:47.469 16:41:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 2761288 00:27:47.469 16:41:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:27:47.469 16:41:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:47.469 16:41:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2761288 00:27:47.469 16:41:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:47.469 16:41:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:47.469 16:41:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2761288' 00:27:47.469 killing process with pid 2761288 00:27:47.469 16:41:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 2761288 00:27:47.469 16:41:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 2761288 00:27:47.727 16:41:07 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:27:47.728 00:27:47.728 real 0m5.715s 00:27:47.728 user 0m16.171s 00:27:47.728 sys 0m0.543s 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:47.728 ************************************ 00:27:47.728 END TEST nvmf_vfio_user_nvme_compliance 00:27:47.728 ************************************ 00:27:47.728 16:41:07 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:27:47.728 16:41:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:47.728 16:41:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:47.728 16:41:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:47.728 ************************************ 00:27:47.728 START TEST nvmf_vfio_user_fuzz 00:27:47.728 ************************************ 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:27:47.728 * Looking for test storage... 00:27:47.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
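The common.sh fragment above assembles the target's command line as a bash array rather than a flat string, so every flag/value pair stays a single word no matter what it expands to; NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) appends the shared-memory id and the 0xFFFF tracepoint-group mask seen later on the nvmf_tgt invocations. A minimal sketch of the pattern, with the binary path left as an assumption:

    NVMF_APP=(./build/bin/nvmf_tgt)                 # base argv; path is illustrative
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # shm id + tracepoint mask, as in the log
    NVMF_APP+=("${NO_HUGE[@]}")                     # optional extras are appended the same way
    "${NVMF_APP[@]}"                                # quoted expansion: one argument per element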
00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2762011 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2762011' 00:27:47.728 Process pid: 2762011 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2762011 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 2762011 ']' 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
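The sequence above is the standard target bring-up for these tests: fork nvmf_tgt into the background, record its pid (2762011 here), arm a trap so the process is killed on any exit path, and block in waitforlisten until the RPC socket /var/tmp/spdk.sock answers. A rough standalone equivalent, assuming nvmf_tgt and SPDK's scripts/rpc.py are on PATH (the -t timeout flag and the framework_wait_init method are the usual way to wait, but treat the exact invocation as an assumption rather than what common.sh literally runs):

    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &        # same flags as the nvmf_tgt launch above
    nvmfpid=$!
    trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT   # never leak the target on failure
    rpc.py -t 30 framework_wait_init        # returns once init is done and the socket is live

The rpc_cmd calls below then stand the VFIOUSER target up piece by piece: create the transport, back it with a 64 MB / 512-byte-block malloc bdev, create subsystem nqn.2021-09.io.spdk:cnode0, attach the namespace, and listen on /var/run/vfio-user before nvme_fuzz is pointed at it for 30 seconds.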
00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:47.728 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:48.294 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:48.294 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:27:48.294 16:41:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:49.225 malloc0 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:27:49.225 16:41:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:28:21.289 Fuzzing completed. 
Shutting down the fuzz application 00:28:21.289 00:28:21.289 Dumping successful admin opcodes: 00:28:21.289 8, 9, 10, 24, 00:28:21.289 Dumping successful io opcodes: 00:28:21.289 0, 00:28:21.289 NS: 0x200003a1ef00 I/O qp, Total commands completed: 604700, total successful commands: 2339, random_seed: 3721433408 00:28:21.289 NS: 0x200003a1ef00 admin qp, Total commands completed: 98201, total successful commands: 801, random_seed: 514308160 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2762011 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 2762011 ']' 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 2762011 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2762011 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2762011' 00:28:21.289 killing process with pid 2762011 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 2762011 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 2762011 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:28:21.289 00:28:21.289 real 0m32.207s 00:28:21.289 user 0m31.110s 00:28:21.289 sys 0m29.800s 00:28:21.289 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:21.290 16:41:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:21.290 ************************************ 00:28:21.290 END TEST nvmf_vfio_user_fuzz 00:28:21.290 ************************************ 00:28:21.290 16:41:39 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:28:21.290 16:41:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:21.290 16:41:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:21.290 16:41:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:21.290 ************************************ 00:28:21.290 START TEST nvmf_host_management 00:28:21.290 
************************************ 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:28:21.290 * Looking for test storage... 00:28:21.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
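Everything from here to the ping checks is nvmftestinit rebuilding the physical-NIC ("phy") test topology: the script scans the PCI bus for supported Intel/Mellanox parts, finds the two E810 ports (0000:82:00.0/1, exposed as cvl_0_0 and cvl_0_1), and splits them across network namespaces so one host can act as both NVMe/TCP target and initiator. Condensed from the commands below (all taken from the log; run as root):

    ip netns add cvl_0_0_ns_spdk                        # target gets its own net stack
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # sanity-check the path before testing

The sub-millisecond round-trip times in the ping output below confirm the link is direct port-to-port.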
00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:28:21.290 16:41:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.664 16:41:41 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:28:22.664 Found 0000:82:00.0 (0x8086 - 0x159b) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:28:22.664 Found 0000:82:00.1 (0x8086 - 0x159b) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:28:22.664 Found net devices under 0000:82:00.0: cvl_0_0 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:28:22.664 Found net devices under 0000:82:00.1: cvl_0_1 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.664 16:41:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.664 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.664 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.664 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.664 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.664 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.664 16:41:42 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:28:22.665 00:28:22.665 --- 10.0.0.2 ping statistics --- 00:28:22.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.665 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:28:22.665 00:28:22.665 --- 10.0.0.1 ping statistics --- 00:28:22.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.665 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2767754 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2767754 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2767754 ']' 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:22.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:22.665 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:22.665 [2024-07-22 16:41:42.167028] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:22.665 [2024-07-22 16:41:42.167112] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.665 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.665 [2024-07-22 16:41:42.249356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:22.923 [2024-07-22 16:41:42.342739] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.923 [2024-07-22 16:41:42.342794] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.923 [2024-07-22 16:41:42.342810] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.923 [2024-07-22 16:41:42.342824] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.923 [2024-07-22 16:41:42.342835] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.923 [2024-07-22 16:41:42.342938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.923 [2024-07-22 16:41:42.343026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.923 [2024-07-22 16:41:42.343097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.923 [2024-07-22 16:41:42.343094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:22.923 [2024-07-22 16:41:42.492771] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:22.923 Malloc0 00:28:22.923 [2024-07-22 16:41:42.553708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.923 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2767901 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2767901 /var/tmp/bdevperf.sock 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2767901 ']' 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:23.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
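bdevperf above receives its bdev configuration through process substitution: gen_nvmf_target_json renders a JSON document on stdout and the shell exposes it to --json as /dev/fd/63, so no config file ever touches disk. A stripped-down sketch of that plumbing, using only the attach-controller entry printed a few lines below (the real helper wraps entries like this in a larger subsystems document, so take this as the shape of the mechanism, not a drop-in config):

    config='{ "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0", "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false, "ddgst": false },
              "method": "bdev_nvme_attach_controller" }'
    bdevperf --json <(printf '%s\n' "$config") -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10

The trailing flags come straight from the invocation above: queue depth 64, 64 KiB I/Os, a verify (read-back-and-check) workload for 10 seconds, with bdevperf's own RPC server on /var/tmp/bdevperf.sock.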
00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.181 { 00:28:23.181 "params": { 00:28:23.181 "name": "Nvme$subsystem", 00:28:23.181 "trtype": "$TEST_TRANSPORT", 00:28:23.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.181 "adrfam": "ipv4", 00:28:23.181 "trsvcid": "$NVMF_PORT", 00:28:23.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.181 "hdgst": ${hdgst:-false}, 00:28:23.181 "ddgst": ${ddgst:-false} 00:28:23.181 }, 00:28:23.181 "method": "bdev_nvme_attach_controller" 00:28:23.181 } 00:28:23.181 EOF 00:28:23.181 )") 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:28:23.181 16:41:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:23.181 "params": { 00:28:23.181 "name": "Nvme0", 00:28:23.181 "trtype": "tcp", 00:28:23.181 "traddr": "10.0.0.2", 00:28:23.181 "adrfam": "ipv4", 00:28:23.181 "trsvcid": "4420", 00:28:23.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:23.181 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:23.181 "hdgst": false, 00:28:23.181 "ddgst": false 00:28:23.181 }, 00:28:23.181 "method": "bdev_nvme_attach_controller" 00:28:23.181 }' 00:28:23.181 [2024-07-22 16:41:42.632812] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:23.181 [2024-07-22 16:41:42.632886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2767901 ] 00:28:23.181 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.181 [2024-07-22 16:41:42.704970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.181 [2024-07-22 16:41:42.791769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.439 Running I/O for 10 seconds... 
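Once "Running I/O for 10 seconds..." appears, the script does not trust the workload blindly: the waitforio loop in the xtrace that follows polls bdevperf over its private RPC socket until the new Nvme0n1 bdev has demonstrably completed reads (first sample 65 ops, then 515, at which point 515 -ge 100 succeeds and the loop breaks). The reconstructed shape of that loop; rpc_cmd in the log is a thin wrapper, so scripts/rpc.py -s <socket> stands in for it here as an assumption:

    i=10
    while (( i != 0 )); do
        read_io_count=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            break                        # enough traffic observed; the test may proceed
        fi
        sleep 0.25
        (( i-- ))
    done

With traffic confirmed, the nvmf_subsystem_remove_host call further down deliberately de-authorizes the running initiator; the storm of tcp.c:1598 recv-state errors and ABORTED - SQ DELETION completions after it is the behavior being exercised, not a test failure.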
00:28:23.696 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:23.696 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:28:23.696 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:23.696 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.696 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:23.696 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.696 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:23.696 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:28:23.697 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:23.956 [2024-07-22 16:41:43.472758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9e980 is same with the state(5) to be set 00:28:23.956 tcp.c:1598:nvmf_tcp_qpair_set_recv_state: message repeated ~40 times (16:41:43.472821 through 16:41:43.473345, identical text, advancing microsecond timestamps) 00:28:23.956 [2024-07-22 16:41:43.473357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9e980 is
same with the state(5) to be set 00:28:23.956 [2024-07-22 16:41:43.473369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9e980 is same with the state(5) to be set 00:28:23.956 [2024-07-22 16:41:43.473382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9e980 is same with the state(5) to be set 00:28:23.956 [2024-07-22 16:41:43.473394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9e980 is same with the state(5) to be set 00:28:23.956 [2024-07-22 16:41:43.473407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9e980 is same with the state(5) to be set 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.956 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:23.957 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.957 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:23.957 [2024-07-22 16:41:43.477815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.477855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.477888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.477905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.477922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.477936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.477961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.477984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478072] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.957 [2024-07-22 16:41:43.478948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:23.957 [2024-07-22 16:41:43.478970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.478985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:23.958 [2024-07-22 16:41:43.479266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 
[2024-07-22 16:41:43.479557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.958 [2024-07-22 16:41:43.479748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.958 [2024-07-22 16:41:43.479845] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1113330 was disconnected and freed. reset controller. 
00:28:23.958 [2024-07-22 16:41:43.479933 .. 16:41:43.480051] nvme_qpair.c: 223/474: *NOTICE*: 4 outstanding ASYNC EVENT REQUEST commands (qid:0 cid:0-3) completed as ABORTED - SQ DELETION (00/08) [8 paired command/completion lines collapsed here]
00:28:23.958 [2024-07-22 16:41:43.480064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1118f00 is same with the state(5) to be set
00:28:23.958 [2024-07-22 16:41:43.481207] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:28:23.958 task offset: 67968 on job bdev=Nvme0n1 fails
00:28:23.958
00:28:23.958 Latency(us)
00:28:23.958 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min       max
00:28:23.958 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:23.958 Job: Nvme0n1 ended in about 0.39 seconds with error
00:28:23.958 Verification LBA range: start 0x0 length 0x400
00:28:23.958 Nvme0n1            :       0.39  1356.46   84.78  163.49  0.00  40891.71  2609.30  36311.80
00:28:23.958 ===================================================================================================================
00:28:23.958 Total              :             1356.46   84.78  163.49  0.00  40891.71  2609.30  36311.80
00:28:23.958 [2024-07-22 16:41:43.483279] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:23.958 [2024-07-22 16:41:43.483308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1118f00 (9): Bad file descriptor
00:28:23.958 16:41:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:23.958 16:41:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:28:23.958 [2024-07-22 16:41:43.493924] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
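For orientation before the next run: the episode above is host_management.sh (@84-@87 in the trace) revoking and then restoring this initiator's access to the subsystem while bdevperf has I/O in flight. A condensed sketch of the two RPCs doing the work (NQNs exactly as they appear in this log; rpc.py path shortened):

    # revoke access: the target drops the host's qpair, so all 64 queued
    # commands complete as ABORTED - SQ DELETION (00/08), as logged above
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # restore access: the initiator's bdev_nvme layer resets the controller
    # and reconnects ("Resetting controller successful." above)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0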
00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2767901 00:28:24.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2767901) - No such process 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.890 { 00:28:24.890 "params": { 00:28:24.890 "name": "Nvme$subsystem", 00:28:24.890 "trtype": "$TEST_TRANSPORT", 00:28:24.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.890 "adrfam": "ipv4", 00:28:24.890 "trsvcid": "$NVMF_PORT", 00:28:24.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.890 "hdgst": ${hdgst:-false}, 00:28:24.890 "ddgst": ${ddgst:-false} 00:28:24.890 }, 00:28:24.890 "method": "bdev_nvme_attach_controller" 00:28:24.890 } 00:28:24.890 EOF 00:28:24.890 )") 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:28:24.890 16:41:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:24.890 "params": { 00:28:24.890 "name": "Nvme0", 00:28:24.890 "trtype": "tcp", 00:28:24.890 "traddr": "10.0.0.2", 00:28:24.890 "adrfam": "ipv4", 00:28:24.890 "trsvcid": "4420", 00:28:24.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:24.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:24.890 "hdgst": false, 00:28:24.890 "ddgst": false 00:28:24.890 }, 00:28:24.890 "method": "bdev_nvme_attach_controller" 00:28:24.890 }' 00:28:24.890 [2024-07-22 16:41:44.530314] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:24.890 [2024-07-22 16:41:44.530392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2768165 ] 00:28:25.148 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.148 [2024-07-22 16:41:44.600575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.148 [2024-07-22 16:41:44.689961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.406 Running I/O for 1 seconds... 
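A note on the invocation just traced while the 1-second run completes: bdevperf never sees a config file on disk here. gen_nvmf_target_json assembles the JSON shown above from the heredoc template, and `--json /dev/fd/62` is, most likely, the file descriptor that bash process substitution handed to the program (the exact fd number is an assumption and varies per run). A minimal sketch of the equivalent call:

    # feed the generated target config to bdevperf without a temp file;
    # <(...) expands to something like /dev/fd/62 at run time
    ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 1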
00:28:26.339
00:28:26.339 Latency(us)
00:28:26.339 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average       min       max
00:28:26.339 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:26.339 Verification LBA range: start 0x0 length 0x400
00:28:26.339 Nvme0n1            :       1.02  1439.51   89.97    0.00  0.00  43792.52  11602.30  34369.99
00:28:26.339 ===================================================================================================================
00:28:26.339 Total              :             1439.51   89.97    0.00  0.00  43792.52  11602.30  34369.99
00:28:26.596 16:41:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2767754 ']'
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2767754
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 2767754 ']'
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 2767754
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2767754
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2767754'
killing process with pid 2767754
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 2767754
00:28:26.597 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 2767754
00:28:26.854 [2024-07-22 16:41:46.434174]
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:26.854 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:26.854 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:26.854 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:26.854 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:26.854 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:26.854 16:41:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.854 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.854 16:41:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.383 16:41:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:29.383 16:41:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:29.383 00:28:29.383 real 0m8.975s 00:28:29.383 user 0m19.380s 00:28:29.383 sys 0m2.979s 00:28:29.383 16:41:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:29.383 16:41:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:29.383 ************************************ 00:28:29.383 END TEST nvmf_host_management 00:28:29.383 ************************************ 00:28:29.383 16:41:48 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:28:29.383 16:41:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:29.383 16:41:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:29.383 16:41:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:29.383 ************************************ 00:28:29.383 START TEST nvmf_lvol 00:28:29.383 ************************************ 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:28:29.383 * Looking for test storage... 
00:28:29.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.383 16:41:48 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.383 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:29.384 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:29.384 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:29.384 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.384 16:41:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.384 16:41:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.384 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:29.384 16:41:48 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:29.384 16:41:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:28:29.384 16:41:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:28:31.913 Found 0000:82:00.0 (0x8086 - 0x159b) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:28:31.913 Found 0000:82:00.1 (0x8086 - 0x159b) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:28:31.913 Found net devices under 0000:82:00.0: cvl_0_0 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:28:31.913 Found net devices under 0000:82:00.1: cvl_0_1 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:31.913 
16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:31.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:28:31.913 00:28:31.913 --- 10.0.0.2 ping statistics --- 00:28:31.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.913 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:28:31.913 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:31.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:28:31.913 00:28:31.913 --- 10.0.0.1 ping statistics --- 00:28:31.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.914 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2770615 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2770615 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 2770615 ']' 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:31.914 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:31.914 [2024-07-22 16:41:51.342230] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:31.914 [2024-07-22 16:41:51.342341] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.914 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.914 [2024-07-22 16:41:51.417060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:31.914 [2024-07-22 16:41:51.500549] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.914 [2024-07-22 16:41:51.500599] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
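Stepping back while the target app comes up: the two pings above just verified the split-namespace topology that nvmf_tcp_init built a moment earlier. A condensed recap of that plumbing (commands reconstructed from the nvmf_tcp_init trace above):

    # default netns:          cvl_0_1 -> 10.0.0.1/24 (initiator side)
    # cvl_0_0_ns_spdk netns:  cvl_0_0 -> 10.0.0.2/24 (target side)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in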
00:28:31.914 [2024-07-22 16:41:51.500628] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.914 [2024-07-22 16:41:51.500639] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.914 [2024-07-22 16:41:51.500648] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:31.914 [2024-07-22 16:41:51.500730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.914 [2024-07-22 16:41:51.500797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.914 [2024-07-22 16:41:51.500800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.171 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:32.171 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:28:32.171 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:32.171 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.171 16:41:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:32.171 16:41:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.172 16:41:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:32.429 [2024-07-22 16:41:51.889862] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.429 16:41:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:32.687 16:41:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:32.687 16:41:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:32.945 16:41:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:32.945 16:41:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:33.202 16:41:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:33.460 16:41:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8120c79c-8250-4ba5-b026-d226002ba0d7 00:28:33.460 16:41:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8120c79c-8250-4ba5-b026-d226002ba0d7 lvol 20 00:28:33.718 16:41:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0021f5f0-1d12-4e43-b9e7-fefb97d00847 00:28:33.718 16:41:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:33.975 16:41:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0021f5f0-1d12-4e43-b9e7-fefb97d00847 00:28:34.233 16:41:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
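Recapping the storage stack the lvol test just assembled over RPC, bottom to top (same commands and UUIDs as the trace above, rpc.py path shortened; the listener is acknowledged by the target just below):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512           # -> Malloc0 (64 MB, 512 B blocks)
    scripts/rpc.py bdev_malloc_create 64 512           # -> Malloc1
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs  # -> 8120c79c-8250-4ba5-b026-d226002ba0d7
    scripts/rpc.py bdev_lvol_create -u 8120c79c-8250-4ba5-b026-d226002ba0d7 lvol 20   # 20 MB lvol
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0021f5f0-1d12-4e43-b9e7-fefb97d00847
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420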
00:28:34.490 [2024-07-22 16:41:54.043709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.490 16:41:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:34.747 16:41:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2770979 00:28:34.747 16:41:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:34.747 16:41:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:34.747 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.679 16:41:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0021f5f0-1d12-4e43-b9e7-fefb97d00847 MY_SNAPSHOT 00:28:36.244 16:41:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c73f236f-7ea0-4594-b2b5-da0b5a1d38b7 00:28:36.244 16:41:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0021f5f0-1d12-4e43-b9e7-fefb97d00847 30 00:28:36.502 16:41:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c73f236f-7ea0-4594-b2b5-da0b5a1d38b7 MY_CLONE 00:28:36.760 16:41:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=317769ff-a1c7-4fde-b8f8-1ce190abda77 00:28:36.760 16:41:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 317769ff-a1c7-4fde-b8f8-1ce190abda77 00:28:37.325 16:41:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2770979 00:28:45.430 Initializing NVMe Controllers 00:28:45.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:45.430 Controller IO queue size 128, less than required. 00:28:45.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:45.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:45.430 Initialization complete. Launching workers. 
00:28:45.430 ======================================================== 00:28:45.430 Latency(us) 00:28:45.430 Device Information : IOPS MiB/s Average min max 00:28:45.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10543.80 41.19 12145.22 564.95 83501.39 00:28:45.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10418.50 40.70 12294.34 1969.34 74495.41 00:28:45.430 ======================================================== 00:28:45.430 Total : 20962.30 81.88 12219.34 564.95 83501.39 00:28:45.430 00:28:45.430 16:42:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:45.430 16:42:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0021f5f0-1d12-4e43-b9e7-fefb97d00847 00:28:45.688 16:42:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8120c79c-8250-4ba5-b026-d226002ba0d7 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:45.946 rmmod nvme_tcp 00:28:45.946 rmmod nvme_fabrics 00:28:45.946 rmmod nvme_keyring 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2770615 ']' 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2770615 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 2770615 ']' 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 2770615 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2770615 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2770615' 00:28:45.946 killing process with pid 2770615 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 2770615 00:28:45.946 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 2770615 00:28:46.512 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:46.512 
16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:46.512 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:46.512 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:46.512 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:46.512 16:42:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.512 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:46.512 16:42:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.414 16:42:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:48.414 00:28:48.414 real 0m19.340s 00:28:48.414 user 1m4.806s 00:28:48.414 sys 0m6.005s 00:28:48.414 16:42:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:48.414 16:42:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:48.414 ************************************ 00:28:48.414 END TEST nvmf_lvol 00:28:48.415 ************************************ 00:28:48.415 16:42:07 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:28:48.415 16:42:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:48.415 16:42:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:48.415 16:42:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:48.415 ************************************ 00:28:48.415 START TEST nvmf_lvs_grow 00:28:48.415 ************************************ 00:28:48.415 16:42:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:28:48.415 * Looking for test storage... 
00:28:48.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:28:48.415 16:42:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:50.944 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.944 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:28:50.945 Found 0000:82:00.0 (0x8086 - 0x159b) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:28:50.945 Found 0000:82:00.1 (0x8086 - 0x159b) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:28:50.945 Found net devices under 0000:82:00.0: cvl_0_0 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:28:50.945 Found net devices under 0000:82:00.1: cvl_0_1 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:50.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:28:50.945 00:28:50.945 --- 10.0.0.2 ping statistics --- 00:28:50.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.945 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:50.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:28:50.945 00:28:50.945 --- 10.0.0.1 ping statistics --- 00:28:50.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.945 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2775257 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2775257 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 2775257 ']' 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.945 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:50.946 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:50.946 [2024-07-22 16:42:10.587047] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:50.946 [2024-07-22 16:42:10.587120] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.203 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.204 [2024-07-22 16:42:10.666894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.204 [2024-07-22 16:42:10.761744] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.204 [2024-07-22 16:42:10.761820] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:51.204 [2024-07-22 16:42:10.761838] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.204 [2024-07-22 16:42:10.761852] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.204 [2024-07-22 16:42:10.761864] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:51.204 [2024-07-22 16:42:10.761897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.462 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:51.462 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:28:51.462 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:51.462 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.462 16:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:51.462 16:42:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.462 16:42:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:51.719 [2024-07-22 16:42:11.182989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.719 16:42:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:51.719 16:42:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:51.719 16:42:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:51.719 16:42:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:51.719 ************************************ 00:28:51.719 START TEST lvs_grow_clean 00:28:51.719 ************************************ 00:28:51.719 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:28:51.719 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:51.719 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:51.719 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:51.719 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:51.719 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:51.719 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:51.720 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:51.720 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:51.720 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:51.977 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:28:51.977 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:52.235 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:28:52.235 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:28:52.235 16:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:52.493 16:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:52.493 16:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:52.493 16:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f lvol 150 00:28:52.750 16:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=91f690ce-bb8f-4e77-b617-4d2334242d88 00:28:52.751 16:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:52.751 16:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:53.008 [2024-07-22 16:42:12.503110] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:53.008 [2024-07-22 16:42:12.503216] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:53.008 true 00:28:53.008 16:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:28:53.008 16:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:53.266 16:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:53.267 16:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:53.523 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 91f690ce-bb8f-4e77-b617-4d2334242d88 00:28:53.781 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:54.039 [2024-07-22 16:42:13.546295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.039 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:54.297 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2775647 00:28:54.297 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:54.297 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:54.297 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2775647 /var/tmp/bdevperf.sock 00:28:54.297 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 2775647 ']' 00:28:54.297 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:54.297 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:54.297 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:54.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:54.297 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:54.297 16:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:54.297 [2024-07-22 16:42:13.896754] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:28:54.297 [2024-07-22 16:42:13.896836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775647 ] 00:28:54.297 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.555 [2024-07-22 16:42:13.974526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.555 [2024-07-22 16:42:14.065755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.555 16:42:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:54.555 16:42:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:28:54.555 16:42:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:55.119 Nvme0n1 00:28:55.119 16:42:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:55.377 [ 00:28:55.377 { 00:28:55.377 "name": "Nvme0n1", 00:28:55.377 "aliases": [ 00:28:55.377 "91f690ce-bb8f-4e77-b617-4d2334242d88" 00:28:55.377 ], 00:28:55.377 "product_name": "NVMe disk", 00:28:55.377 "block_size": 4096, 00:28:55.377 "num_blocks": 38912, 00:28:55.377 "uuid": "91f690ce-bb8f-4e77-b617-4d2334242d88", 00:28:55.377 "assigned_rate_limits": { 00:28:55.377 "rw_ios_per_sec": 0, 00:28:55.377 "rw_mbytes_per_sec": 0, 00:28:55.377 "r_mbytes_per_sec": 0, 00:28:55.377 "w_mbytes_per_sec": 0 00:28:55.377 }, 00:28:55.377 "claimed": false, 00:28:55.377 "zoned": false, 00:28:55.377 "supported_io_types": { 00:28:55.377 "read": true, 00:28:55.377 "write": true, 00:28:55.377 "unmap": true, 00:28:55.377 "write_zeroes": true, 00:28:55.377 "flush": true, 00:28:55.377 "reset": true, 00:28:55.377 "compare": true, 00:28:55.377 "compare_and_write": true, 00:28:55.377 "abort": true, 00:28:55.377 "nvme_admin": true, 00:28:55.377 "nvme_io": true 00:28:55.377 }, 00:28:55.377 "memory_domains": [ 00:28:55.377 { 00:28:55.377 "dma_device_id": "system", 00:28:55.377 "dma_device_type": 1 00:28:55.377 } 00:28:55.377 ], 00:28:55.377 "driver_specific": { 00:28:55.377 "nvme": [ 00:28:55.377 { 00:28:55.377 "trid": { 00:28:55.377 "trtype": "TCP", 00:28:55.377 "adrfam": "IPv4", 00:28:55.377 "traddr": "10.0.0.2", 00:28:55.377 "trsvcid": "4420", 00:28:55.377 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:55.377 }, 00:28:55.377 "ctrlr_data": { 00:28:55.377 "cntlid": 1, 00:28:55.377 "vendor_id": "0x8086", 00:28:55.377 "model_number": "SPDK bdev Controller", 00:28:55.377 "serial_number": "SPDK0", 00:28:55.377 "firmware_revision": "24.05.1", 00:28:55.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:55.377 "oacs": { 00:28:55.377 "security": 0, 00:28:55.377 "format": 0, 00:28:55.377 "firmware": 0, 00:28:55.377 "ns_manage": 0 00:28:55.377 }, 00:28:55.377 "multi_ctrlr": true, 00:28:55.377 "ana_reporting": false 00:28:55.377 }, 00:28:55.377 "vs": { 00:28:55.377 "nvme_version": "1.3" 00:28:55.377 }, 00:28:55.377 "ns_data": { 00:28:55.377 "id": 1, 00:28:55.377 "can_share": true 00:28:55.377 } 00:28:55.377 } 00:28:55.377 ], 00:28:55.377 "mp_policy": "active_passive" 00:28:55.377 } 00:28:55.377 } 00:28:55.377 ] 00:28:55.377 16:42:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2775709 00:28:55.377 16:42:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:55.377 16:42:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:55.377 Running I/O for 10 seconds... 00:28:56.310 Latency(us) 00:28:56.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:56.310 Nvme0n1 : 1.00 16162.00 63.13 0.00 0.00 0.00 0.00 0.00 00:28:56.310 =================================================================================================================== 00:28:56.310 Total : 16162.00 63.13 0.00 0.00 0.00 0.00 0.00 00:28:56.310 00:28:57.242 16:42:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:28:57.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:57.499 Nvme0n1 : 2.00 15952.00 62.31 0.00 0.00 0.00 0.00 0.00 00:28:57.499 =================================================================================================================== 00:28:57.499 Total : 15952.00 62.31 0.00 0.00 0.00 0.00 0.00 00:28:57.499 00:28:57.499 true 00:28:57.499 16:42:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:28:57.499 16:42:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:58.065 16:42:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:58.065 16:42:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:58.065 16:42:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2775709 00:28:58.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:58.322 Nvme0n1 : 3.00 15555.00 60.76 0.00 0.00 0.00 0.00 0.00 00:28:58.322 =================================================================================================================== 00:28:58.322 Total : 15555.00 60.76 0.00 0.00 0.00 0.00 0.00 00:28:58.322 00:28:59.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:59.694 Nvme0n1 : 4.00 15393.50 60.13 0.00 0.00 0.00 0.00 0.00 00:28:59.694 =================================================================================================================== 00:28:59.694 Total : 15393.50 60.13 0.00 0.00 0.00 0.00 0.00 00:28:59.694 00:29:00.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:00.626 Nvme0n1 : 5.00 15412.80 60.21 0.00 0.00 0.00 0.00 0.00 00:29:00.626 =================================================================================================================== 00:29:00.626 Total : 15412.80 60.21 0.00 0.00 0.00 0.00 0.00 00:29:00.626 00:29:01.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:01.560 Nvme0n1 : 6.00 15329.50 59.88 0.00 0.00 0.00 0.00 0.00 00:29:01.560 
=================================================================================================================== 00:29:01.560 Total : 15329.50 59.88 0.00 0.00 0.00 0.00 0.00 00:29:01.560 00:29:02.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:02.494 Nvme0n1 : 7.00 15293.43 59.74 0.00 0.00 0.00 0.00 0.00 00:29:02.494 =================================================================================================================== 00:29:02.494 Total : 15293.43 59.74 0.00 0.00 0.00 0.00 0.00 00:29:02.494 00:29:03.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:03.430 Nvme0n1 : 8.00 15245.12 59.55 0.00 0.00 0.00 0.00 0.00 00:29:03.430 =================================================================================================================== 00:29:03.430 Total : 15245.12 59.55 0.00 0.00 0.00 0.00 0.00 00:29:03.430 00:29:04.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:04.418 Nvme0n1 : 9.00 15216.11 59.44 0.00 0.00 0.00 0.00 0.00 00:29:04.418 =================================================================================================================== 00:29:04.418 Total : 15216.11 59.44 0.00 0.00 0.00 0.00 0.00 00:29:04.418 00:29:05.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:05.393 Nvme0n1 : 10.00 15178.70 59.29 0.00 0.00 0.00 0.00 0.00 00:29:05.393 =================================================================================================================== 00:29:05.393 Total : 15178.70 59.29 0.00 0.00 0.00 0.00 0.00 00:29:05.393 00:29:05.393 00:29:05.393 Latency(us) 00:29:05.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:05.393 Nvme0n1 : 10.01 15178.94 59.29 0.00 0.00 8427.02 4587.52 16408.27 00:29:05.393 =================================================================================================================== 00:29:05.393 Total : 15178.94 59.29 0.00 0.00 8427.02 4587.52 16408.27 00:29:05.393 0 00:29:05.393 16:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2775647 00:29:05.393 16:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 2775647 ']' 00:29:05.393 16:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 2775647 00:29:05.393 16:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:29:05.393 16:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:05.393 16:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2775647 00:29:05.393 16:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:05.393 16:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:05.393 16:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2775647' 00:29:05.393 killing process with pid 2775647 00:29:05.393 16:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 2775647 00:29:05.393 Received shutdown signal, test time was about 10.000000 seconds 00:29:05.393 00:29:05.393 Latency(us) 00:29:05.393 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:29:05.393 =================================================================================================================== 00:29:05.393 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.393 16:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 2775647 00:29:05.651 16:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:05.909 16:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:06.167 16:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:29:06.167 16:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:06.424 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:06.424 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:06.424 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:06.991 [2024-07-22 16:42:26.394078] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:06.991 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:29:07.249 request: 00:29:07.249 { 00:29:07.249 "uuid": "677d027c-0c4c-405a-a99b-cbe2f445fa4f", 00:29:07.249 "method": "bdev_lvol_get_lvstores", 00:29:07.249 "req_id": 1 00:29:07.249 } 00:29:07.249 Got JSON-RPC error response 00:29:07.249 response: 00:29:07.249 { 00:29:07.249 "code": -19, 00:29:07.249 "message": "No such device" 00:29:07.249 } 00:29:07.249 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:29:07.249 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:07.249 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:07.249 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:07.249 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:07.507 aio_bdev 00:29:07.507 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 91f690ce-bb8f-4e77-b617-4d2334242d88 00:29:07.507 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=91f690ce-bb8f-4e77-b617-4d2334242d88 00:29:07.507 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:29:07.507 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:29:07.507 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:29:07.507 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:29:07.507 16:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:07.765 16:42:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 91f690ce-bb8f-4e77-b617-4d2334242d88 -t 2000 00:29:08.023 [ 00:29:08.023 { 00:29:08.023 "name": "91f690ce-bb8f-4e77-b617-4d2334242d88", 00:29:08.023 "aliases": [ 00:29:08.023 "lvs/lvol" 00:29:08.023 ], 00:29:08.023 "product_name": "Logical Volume", 00:29:08.023 "block_size": 4096, 00:29:08.023 "num_blocks": 38912, 00:29:08.023 "uuid": "91f690ce-bb8f-4e77-b617-4d2334242d88", 00:29:08.023 "assigned_rate_limits": { 00:29:08.023 "rw_ios_per_sec": 0, 00:29:08.023 "rw_mbytes_per_sec": 0, 00:29:08.023 "r_mbytes_per_sec": 0, 00:29:08.023 "w_mbytes_per_sec": 0 00:29:08.023 }, 00:29:08.023 "claimed": false, 00:29:08.023 "zoned": false, 00:29:08.023 "supported_io_types": { 00:29:08.023 "read": true, 00:29:08.023 "write": true, 00:29:08.023 "unmap": true, 00:29:08.023 "write_zeroes": true, 00:29:08.023 "flush": false, 00:29:08.023 "reset": true, 00:29:08.023 "compare": false, 00:29:08.023 "compare_and_write": false, 00:29:08.023 "abort": false, 00:29:08.023 "nvme_admin": false, 00:29:08.023 "nvme_io": false 00:29:08.023 }, 00:29:08.023 "driver_specific": { 00:29:08.023 "lvol": { 00:29:08.023 "lvol_store_uuid": "677d027c-0c4c-405a-a99b-cbe2f445fa4f", 00:29:08.023 "base_bdev": "aio_bdev", 
00:29:08.023 "thin_provision": false, 00:29:08.023 "num_allocated_clusters": 38, 00:29:08.023 "snapshot": false, 00:29:08.023 "clone": false, 00:29:08.023 "esnap_clone": false 00:29:08.023 } 00:29:08.023 } 00:29:08.023 } 00:29:08.023 ] 00:29:08.023 16:42:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:29:08.023 16:42:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:08.023 16:42:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:29:08.281 16:42:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:08.281 16:42:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:29:08.281 16:42:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:08.540 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:08.540 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 91f690ce-bb8f-4e77-b617-4d2334242d88 00:29:08.798 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 677d027c-0c4c-405a-a99b-cbe2f445fa4f 00:29:09.056 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:09.314 00:29:09.314 real 0m17.618s 00:29:09.314 user 0m17.096s 00:29:09.314 sys 0m1.937s 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:09.314 ************************************ 00:29:09.314 END TEST lvs_grow_clean 00:29:09.314 ************************************ 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:09.314 ************************************ 00:29:09.314 START TEST lvs_grow_dirty 00:29:09.314 ************************************ 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:09.314 16:42:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:09.573 16:42:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:09.573 16:42:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:09.831 16:42:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:09.831 16:42:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:09.831 16:42:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:10.089 16:42:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:10.089 16:42:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:10.089 16:42:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 lvol 150 00:29:10.347 16:42:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a7b5807c-035b-4fc5-81d5-2f6aae1b6859 00:29:10.347 16:42:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:10.347 16:42:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:10.605 [2024-07-22 16:42:30.186268] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:10.605 [2024-07-22 16:42:30.186356] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:10.605 true 00:29:10.605 16:42:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:10.605 16:42:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:29:10.863 16:42:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:10.863 16:42:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:11.121 16:42:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a7b5807c-035b-4fc5-81d5-2f6aae1b6859 00:29:11.379 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:11.637 [2024-07-22 16:42:31.285597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.895 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:11.895 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2777746 00:29:11.895 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:11.895 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:12.154 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2777746 /var/tmp/bdevperf.sock 00:29:12.154 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2777746 ']' 00:29:12.154 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:12.154 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:12.154 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:12.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:12.154 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:12.154 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:12.154 [2024-07-22 16:42:31.588778] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
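The lvs_grow fixture assembled above reduces to a handful of RPCs: back an AIO bdev with a sparse file, create a logical volume store on it, carve out one lvol, then grow the file and rescan. A minimal sketch of that sequence, assuming an SPDK checkout with scripts/rpc.py on PATH and a target already running (the file path and the $lvs variable are illustrative; the sizes mirror the log):

    AIO_FILE=/tmp/aio_bdev_file                        # illustrative path
    truncate -s 200M "$AIO_FILE"                       # sparse 200 MiB backing file
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096   # 4 KiB logical block size
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # prints the lvstore UUID
    rpc.py bdev_lvol_create -u "$lvs" lvol 150         # 150 MiB logical volume
    truncate -s 400M "$AIO_FILE"                       # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                    # bdev resizes: 51200 -> 102400 blocks
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

Note that the rescan alone does not grow the lvstore: vbdev_lvol reports "Unsupported bdev event: type 1" and total_data_clusters stays at 49 until bdev_lvol_grow_lvstore is invoked explicitly, which is exactly what the @38 assertion above checks.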
00:29:12.154 [2024-07-22 16:42:31.588853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777746 ] 00:29:12.154 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.154 [2024-07-22 16:42:31.661230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.154 [2024-07-22 16:42:31.752272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.412 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:12.412 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:29:12.412 16:42:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:12.670 Nvme0n1 00:29:12.670 16:42:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:12.928 [ 00:29:12.928 { 00:29:12.928 "name": "Nvme0n1", 00:29:12.928 "aliases": [ 00:29:12.928 "a7b5807c-035b-4fc5-81d5-2f6aae1b6859" 00:29:12.928 ], 00:29:12.928 "product_name": "NVMe disk", 00:29:12.928 "block_size": 4096, 00:29:12.928 "num_blocks": 38912, 00:29:12.928 "uuid": "a7b5807c-035b-4fc5-81d5-2f6aae1b6859", 00:29:12.928 "assigned_rate_limits": { 00:29:12.928 "rw_ios_per_sec": 0, 00:29:12.928 "rw_mbytes_per_sec": 0, 00:29:12.928 "r_mbytes_per_sec": 0, 00:29:12.928 "w_mbytes_per_sec": 0 00:29:12.928 }, 00:29:12.928 "claimed": false, 00:29:12.928 "zoned": false, 00:29:12.928 "supported_io_types": { 00:29:12.928 "read": true, 00:29:12.928 "write": true, 00:29:12.928 "unmap": true, 00:29:12.928 "write_zeroes": true, 00:29:12.928 "flush": true, 00:29:12.928 "reset": true, 00:29:12.928 "compare": true, 00:29:12.928 "compare_and_write": true, 00:29:12.928 "abort": true, 00:29:12.928 "nvme_admin": true, 00:29:12.928 "nvme_io": true 00:29:12.928 }, 00:29:12.928 "memory_domains": [ 00:29:12.928 { 00:29:12.928 "dma_device_id": "system", 00:29:12.928 "dma_device_type": 1 00:29:12.928 } 00:29:12.928 ], 00:29:12.928 "driver_specific": { 00:29:12.928 "nvme": [ 00:29:12.928 { 00:29:12.928 "trid": { 00:29:12.928 "trtype": "TCP", 00:29:12.928 "adrfam": "IPv4", 00:29:12.928 "traddr": "10.0.0.2", 00:29:12.928 "trsvcid": "4420", 00:29:12.928 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:12.928 }, 00:29:12.928 "ctrlr_data": { 00:29:12.928 "cntlid": 1, 00:29:12.928 "vendor_id": "0x8086", 00:29:12.928 "model_number": "SPDK bdev Controller", 00:29:12.928 "serial_number": "SPDK0", 00:29:12.928 "firmware_revision": "24.05.1", 00:29:12.928 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:12.928 "oacs": { 00:29:12.928 "security": 0, 00:29:12.928 "format": 0, 00:29:12.928 "firmware": 0, 00:29:12.928 "ns_manage": 0 00:29:12.928 }, 00:29:12.928 "multi_ctrlr": true, 00:29:12.928 "ana_reporting": false 00:29:12.928 }, 00:29:12.928 "vs": { 00:29:12.928 "nvme_version": "1.3" 00:29:12.928 }, 00:29:12.928 "ns_data": { 00:29:12.928 "id": 1, 00:29:12.928 "can_share": true 00:29:12.928 } 00:29:12.928 } 00:29:12.928 ], 00:29:12.928 "mp_policy": "active_passive" 00:29:12.928 } 00:29:12.928 } 00:29:12.928 ] 00:29:12.928 16:42:32 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2777884 00:29:12.928 16:42:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:12.928 16:42:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:13.186 Running I/O for 10 seconds... 00:29:14.128 Latency(us) 00:29:14.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:14.129 Nvme0n1 : 1.00 15397.00 60.14 0.00 0.00 0.00 0.00 0.00 00:29:14.129 =================================================================================================================== 00:29:14.129 Total : 15397.00 60.14 0.00 0.00 0.00 0.00 0.00 00:29:14.129 00:29:15.064 16:42:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:15.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.064 Nvme0n1 : 2.00 15198.50 59.37 0.00 0.00 0.00 0.00 0.00 00:29:15.064 =================================================================================================================== 00:29:15.064 Total : 15198.50 59.37 0.00 0.00 0.00 0.00 0.00 00:29:15.064 00:29:15.323 true 00:29:15.323 16:42:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:15.323 16:42:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:15.582 16:42:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:15.582 16:42:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:15.582 16:42:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2777884 00:29:16.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:16.149 Nvme0n1 : 3.00 15020.33 58.67 0.00 0.00 0.00 0.00 0.00 00:29:16.149 =================================================================================================================== 00:29:16.149 Total : 15020.33 58.67 0.00 0.00 0.00 0.00 0.00 00:29:16.149 00:29:17.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:17.133 Nvme0n1 : 4.00 14751.25 57.62 0.00 0.00 0.00 0.00 0.00 00:29:17.133 =================================================================================================================== 00:29:17.133 Total : 14751.25 57.62 0.00 0.00 0.00 0.00 0.00 00:29:17.133 00:29:18.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:18.067 Nvme0n1 : 5.00 14807.40 57.84 0.00 0.00 0.00 0.00 0.00 00:29:18.067 =================================================================================================================== 00:29:18.067 Total : 14807.40 57.84 0.00 0.00 0.00 0.00 0.00 00:29:18.067 00:29:19.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.441 Nvme0n1 : 6.00 14783.50 57.75 0.00 0.00 0.00 0.00 0.00 00:29:19.441 
=================================================================================================================== 00:29:19.441 Total : 14783.50 57.75 0.00 0.00 0.00 0.00 0.00 00:29:19.441 00:29:20.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.375 Nvme0n1 : 7.00 14759.57 57.65 0.00 0.00 0.00 0.00 0.00 00:29:20.375 =================================================================================================================== 00:29:20.375 Total : 14759.57 57.65 0.00 0.00 0.00 0.00 0.00 00:29:20.375 00:29:21.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.308 Nvme0n1 : 8.00 14657.62 57.26 0.00 0.00 0.00 0.00 0.00 00:29:21.308 =================================================================================================================== 00:29:21.308 Total : 14657.62 57.26 0.00 0.00 0.00 0.00 0.00 00:29:21.308 00:29:22.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:22.241 Nvme0n1 : 9.00 14607.67 57.06 0.00 0.00 0.00 0.00 0.00 00:29:22.241 =================================================================================================================== 00:29:22.241 Total : 14607.67 57.06 0.00 0.00 0.00 0.00 0.00 00:29:22.241 00:29:23.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:23.174 Nvme0n1 : 10.00 14658.90 57.26 0.00 0.00 0.00 0.00 0.00 00:29:23.174 =================================================================================================================== 00:29:23.174 Total : 14658.90 57.26 0.00 0.00 0.00 0.00 0.00 00:29:23.174 00:29:23.174 00:29:23.174 Latency(us) 00:29:23.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:23.174 Nvme0n1 : 10.01 14658.25 57.26 0.00 0.00 8725.05 6553.60 17282.09 00:29:23.174 =================================================================================================================== 00:29:23.174 Total : 14658.25 57.26 0.00 0.00 8725.05 6553.60 17282.09 00:29:23.174 0 00:29:23.174 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2777746 00:29:23.174 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 2777746 ']' 00:29:23.174 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 2777746 00:29:23.174 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:29:23.174 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:23.174 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2777746 00:29:23.174 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:23.174 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:23.174 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2777746' 00:29:23.174 killing process with pid 2777746 00:29:23.174 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 2777746 00:29:23.174 Received shutdown signal, test time was about 10.000000 seconds 00:29:23.174 00:29:23.174 Latency(us) 00:29:23.174 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:29:23.174 =================================================================================================================== 00:29:23.174 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:23.174 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 2777746 00:29:23.432 16:42:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:23.690 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:23.948 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:23.948 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:24.206 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:24.206 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:24.206 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2775257 00:29:24.206 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2775257 00:29:24.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2775257 Killed "${NVMF_APP[@]}" "$@" 00:29:24.206 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:24.206 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:24.206 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:24.206 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:24.206 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:24.207 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2779209 00:29:24.207 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:24.207 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2779209 00:29:24.207 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2779209 ']' 00:29:24.207 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.207 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:24.207 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
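What makes this the dirty variant is visible in the last few steps: the lvstore is grown from 49 to 99 clusters while bdevperf is still issuing random writes, and the nvmf target is then taken down with SIGKILL (the shell's "line 75: ... Killed" message above), so the blobstore never performs a clean unload and its superblock stays marked dirty. In outline, reusing the same illustrative $lvs, $AIO_FILE and $nvmfpid names:

    rpc.py bdev_lvol_grow_lvstore -u "$lvs"            # 49 -> 99 clusters, under live I/O
    kill -9 "$nvmfpid"                                 # no clean blobstore unload
    # restart the target, then re-attach the same backing file:
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096

Re-creating the AIO bdev makes the lvstore load path notice the dirty shutdown; the "Performing recovery on blobstore" and "Recover: blob ..." notices that follow show the metadata being replayed before the lvol is surfaced again.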
00:29:24.207 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:24.207 16:42:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:24.465 [2024-07-22 16:42:43.873216] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:24.465 [2024-07-22 16:42:43.873284] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.465 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.465 [2024-07-22 16:42:43.948980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.465 [2024-07-22 16:42:44.038004] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.465 [2024-07-22 16:42:44.038065] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.465 [2024-07-22 16:42:44.038081] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.465 [2024-07-22 16:42:44.038096] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.465 [2024-07-22 16:42:44.038108] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.465 [2024-07-22 16:42:44.038139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.722 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:24.722 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:29:24.722 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:24.722 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.722 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:24.722 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.723 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:24.981 [2024-07-22 16:42:44.460796] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:24.981 [2024-07-22 16:42:44.460928] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:24.981 [2024-07-22 16:42:44.460993] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:24.981 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:24.981 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a7b5807c-035b-4fc5-81d5-2f6aae1b6859 00:29:24.981 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=a7b5807c-035b-4fc5-81d5-2f6aae1b6859 00:29:24.981 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:29:24.981 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:29:24.981 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:29:24.981 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:29:24.981 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:25.239 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a7b5807c-035b-4fc5-81d5-2f6aae1b6859 -t 2000 00:29:25.497 [ 00:29:25.497 { 00:29:25.497 "name": "a7b5807c-035b-4fc5-81d5-2f6aae1b6859", 00:29:25.497 "aliases": [ 00:29:25.497 "lvs/lvol" 00:29:25.497 ], 00:29:25.497 "product_name": "Logical Volume", 00:29:25.497 "block_size": 4096, 00:29:25.497 "num_blocks": 38912, 00:29:25.497 "uuid": "a7b5807c-035b-4fc5-81d5-2f6aae1b6859", 00:29:25.497 "assigned_rate_limits": { 00:29:25.497 "rw_ios_per_sec": 0, 00:29:25.497 "rw_mbytes_per_sec": 0, 00:29:25.497 "r_mbytes_per_sec": 0, 00:29:25.497 "w_mbytes_per_sec": 0 00:29:25.497 }, 00:29:25.497 "claimed": false, 00:29:25.497 "zoned": false, 00:29:25.497 "supported_io_types": { 00:29:25.497 "read": true, 00:29:25.497 "write": true, 00:29:25.497 "unmap": true, 00:29:25.497 "write_zeroes": true, 00:29:25.497 "flush": false, 00:29:25.497 "reset": true, 00:29:25.497 "compare": false, 00:29:25.497 "compare_and_write": false, 00:29:25.497 "abort": false, 00:29:25.497 "nvme_admin": false, 00:29:25.497 "nvme_io": false 00:29:25.497 }, 00:29:25.497 "driver_specific": { 00:29:25.497 "lvol": { 00:29:25.497 "lvol_store_uuid": "7065f1b8-b834-44f2-8c8e-a1ee46f83d10", 00:29:25.497 "base_bdev": "aio_bdev", 00:29:25.497 "thin_provision": false, 00:29:25.497 "num_allocated_clusters": 38, 00:29:25.497 "snapshot": false, 00:29:25.497 "clone": false, 00:29:25.497 "esnap_clone": false 00:29:25.497 } 00:29:25.497 } 00:29:25.497 } 00:29:25.497 ] 00:29:25.497 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:29:25.497 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:25.497 16:42:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:25.754 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:25.754 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:25.754 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:26.012 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:26.012 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:26.269 [2024-07-22 16:42:45.701676] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:26.269 16:42:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:26.527 request: 00:29:26.527 { 00:29:26.527 "uuid": "7065f1b8-b834-44f2-8c8e-a1ee46f83d10", 00:29:26.527 "method": "bdev_lvol_get_lvstores", 00:29:26.527 "req_id": 1 00:29:26.527 } 00:29:26.527 Got JSON-RPC error response 00:29:26.527 response: 00:29:26.527 { 00:29:26.527 "code": -19, 00:29:26.527 "message": "No such device" 00:29:26.527 } 00:29:26.527 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:29:26.527 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:26.527 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:26.527 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:26.527 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:26.785 aio_bdev 00:29:26.785 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a7b5807c-035b-4fc5-81d5-2f6aae1b6859 00:29:26.785 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=a7b5807c-035b-4fc5-81d5-2f6aae1b6859 00:29:26.785 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:29:26.785 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:29:26.785 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
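The hot-remove check above is the negative half of the persistence test: deleting the AIO bdev closes the lvstore (the vbdev_lvs_hotremove_cb notice), so the next bdev_lvol_get_lvstores call must fail with -19 (ENODEV), which the harness asserts by wrapping the RPC in its NOT helper from autotest_common.sh. Outside the harness the same check is just an inverted exit-status test, sketched here with the same illustrative names:

    rpc.py bdev_aio_delete aio_bdev                    # hot-remove closes lvstore "lvs"
    if rpc.py bdev_lvol_get_lvstores -u "$lvs" 2>/dev/null; then
        echo "lvstore unexpectedly still present" >&2
        exit 1
    fi
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096   # bring the backing bdev back

After the re-create, the bdev_get_bdevs dump that follows shows the lvol intact (thin_provision false, 38 allocated clusters), confirming that the data written before the kill -9 was recovered.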
00:29:26.785 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:29:26.785 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:27.043 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a7b5807c-035b-4fc5-81d5-2f6aae1b6859 -t 2000 00:29:27.301 [ 00:29:27.301 { 00:29:27.301 "name": "a7b5807c-035b-4fc5-81d5-2f6aae1b6859", 00:29:27.301 "aliases": [ 00:29:27.301 "lvs/lvol" 00:29:27.301 ], 00:29:27.301 "product_name": "Logical Volume", 00:29:27.301 "block_size": 4096, 00:29:27.301 "num_blocks": 38912, 00:29:27.301 "uuid": "a7b5807c-035b-4fc5-81d5-2f6aae1b6859", 00:29:27.301 "assigned_rate_limits": { 00:29:27.301 "rw_ios_per_sec": 0, 00:29:27.301 "rw_mbytes_per_sec": 0, 00:29:27.301 "r_mbytes_per_sec": 0, 00:29:27.301 "w_mbytes_per_sec": 0 00:29:27.301 }, 00:29:27.301 "claimed": false, 00:29:27.301 "zoned": false, 00:29:27.301 "supported_io_types": { 00:29:27.301 "read": true, 00:29:27.301 "write": true, 00:29:27.301 "unmap": true, 00:29:27.301 "write_zeroes": true, 00:29:27.301 "flush": false, 00:29:27.301 "reset": true, 00:29:27.301 "compare": false, 00:29:27.301 "compare_and_write": false, 00:29:27.301 "abort": false, 00:29:27.301 "nvme_admin": false, 00:29:27.301 "nvme_io": false 00:29:27.301 }, 00:29:27.301 "driver_specific": { 00:29:27.301 "lvol": { 00:29:27.301 "lvol_store_uuid": "7065f1b8-b834-44f2-8c8e-a1ee46f83d10", 00:29:27.301 "base_bdev": "aio_bdev", 00:29:27.301 "thin_provision": false, 00:29:27.301 "num_allocated_clusters": 38, 00:29:27.301 "snapshot": false, 00:29:27.301 "clone": false, 00:29:27.301 "esnap_clone": false 00:29:27.301 } 00:29:27.301 } 00:29:27.301 } 00:29:27.301 ] 00:29:27.302 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:29:27.302 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:27.302 16:42:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:27.559 16:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:27.559 16:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:27.559 16:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:27.817 16:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:27.817 16:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a7b5807c-035b-4fc5-81d5-2f6aae1b6859 00:29:28.074 16:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7065f1b8-b834-44f2-8c8e-a1ee46f83d10 00:29:28.330 16:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:28.588 00:29:28.588 real 0m19.287s 00:29:28.588 user 0m48.130s 00:29:28.588 sys 0m5.377s 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:28.588 ************************************ 00:29:28.588 END TEST lvs_grow_dirty 00:29:28.588 ************************************ 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:28.588 nvmf_trace.0 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:28.588 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:28.845 rmmod nvme_tcp 00:29:28.845 rmmod nvme_fabrics 00:29:28.845 rmmod nvme_keyring 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2779209 ']' 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2779209 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 2779209 ']' 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 2779209 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2779209 00:29:28.845 16:42:48 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2779209' 00:29:28.845 killing process with pid 2779209 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 2779209 00:29:28.845 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 2779209 00:29:29.103 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:29.103 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:29.103 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:29.103 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:29.103 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:29.103 16:42:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.103 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:29.103 16:42:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.003 16:42:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:31.003 00:29:31.003 real 0m42.636s 00:29:31.003 user 1m11.159s 00:29:31.003 sys 0m9.477s 00:29:31.003 16:42:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:31.003 16:42:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:31.003 ************************************ 00:29:31.003 END TEST nvmf_lvs_grow 00:29:31.003 ************************************ 00:29:31.003 16:42:50 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:29:31.003 16:42:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:31.003 16:42:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:31.003 16:42:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:31.003 ************************************ 00:29:31.003 START TEST nvmf_bdev_io_wait 00:29:31.003 ************************************ 00:29:31.003 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:29:31.260 * Looking for test storage... 
00:29:31.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:31.260 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.260 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:31.260 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.260 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.260 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.260 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.260 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.260 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:29:31.261 16:42:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:29:33.787 Found 0000:82:00.0 (0x8086 - 0x159b) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:29:33.787 Found 0000:82:00.1 (0x8086 - 0x159b) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:29:33.787 Found net devices under 0000:82:00.0: cvl_0_0 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:29:33.787 Found net devices under 0000:82:00.1: cvl_0_1 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:33.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:33.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:29:33.787 00:29:33.787 --- 10.0.0.2 ping statistics --- 00:29:33.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.787 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:29:33.787 00:29:33.787 --- 10.0.0.1 ping statistics --- 00:29:33.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.787 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2782021 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2782021 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 2782021 ']' 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:33.787 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.788 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:33.788 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:33.788 [2024-07-22 16:42:53.268707] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:29:33.788 [2024-07-22 16:42:53.268791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.788 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.788 [2024-07-22 16:42:53.347133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.046 [2024-07-22 16:42:53.439427] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.046 [2024-07-22 16:42:53.439480] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.046 [2024-07-22 16:42:53.439496] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.046 [2024-07-22 16:42:53.439510] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.046 [2024-07-22 16:42:53.439522] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.046 [2024-07-22 16:42:53.439601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.046 [2024-07-22 16:42:53.439667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.046 [2024-07-22 16:42:53.439692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.046 [2024-07-22 16:42:53.439695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:34.046 [2024-07-22 16:42:53.579541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.046 16:42:53 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:34.046 Malloc0 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:34.046 [2024-07-22 16:42:53.639509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2782122 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2782125 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.046 { 00:29:34.046 "params": { 00:29:34.046 "name": "Nvme$subsystem", 00:29:34.046 "trtype": "$TEST_TRANSPORT", 00:29:34.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.046 "adrfam": "ipv4", 00:29:34.046 "trsvcid": "$NVMF_PORT", 00:29:34.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.046 "hdgst": ${hdgst:-false}, 00:29:34.046 "ddgst": ${ddgst:-false} 00:29:34.046 }, 00:29:34.046 "method": "bdev_nvme_attach_controller" 00:29:34.046 } 00:29:34.046 EOF 00:29:34.046 )") 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2782128 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.046 { 00:29:34.046 "params": { 00:29:34.046 "name": "Nvme$subsystem", 00:29:34.046 "trtype": "$TEST_TRANSPORT", 00:29:34.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.046 "adrfam": "ipv4", 00:29:34.046 "trsvcid": "$NVMF_PORT", 00:29:34.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.046 "hdgst": ${hdgst:-false}, 00:29:34.046 "ddgst": ${ddgst:-false} 00:29:34.046 }, 00:29:34.046 "method": "bdev_nvme_attach_controller" 00:29:34.046 } 00:29:34.046 EOF 00:29:34.046 )") 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2782132 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.046 { 00:29:34.046 "params": { 00:29:34.046 "name": "Nvme$subsystem", 00:29:34.046 "trtype": "$TEST_TRANSPORT", 00:29:34.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.046 "adrfam": "ipv4", 00:29:34.046 "trsvcid": "$NVMF_PORT", 00:29:34.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.046 "hdgst": ${hdgst:-false}, 00:29:34.046 "ddgst": ${ddgst:-false} 00:29:34.046 }, 00:29:34.046 "method": "bdev_nvme_attach_controller" 00:29:34.046 } 00:29:34.046 EOF 00:29:34.046 )") 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:29:34.046 { 00:29:34.046 "params": { 00:29:34.046 "name": "Nvme$subsystem", 00:29:34.046 "trtype": "$TEST_TRANSPORT", 00:29:34.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.046 "adrfam": "ipv4", 00:29:34.046 "trsvcid": "$NVMF_PORT", 00:29:34.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.046 "hdgst": ${hdgst:-false}, 00:29:34.046 "ddgst": ${ddgst:-false} 00:29:34.046 }, 00:29:34.046 "method": "bdev_nvme_attach_controller" 00:29:34.046 } 00:29:34.046 EOF 00:29:34.046 )") 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2782122 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:34.046 "params": { 00:29:34.046 "name": "Nvme1", 00:29:34.046 "trtype": "tcp", 00:29:34.046 "traddr": "10.0.0.2", 00:29:34.046 "adrfam": "ipv4", 00:29:34.046 "trsvcid": "4420", 00:29:34.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.046 "hdgst": false, 00:29:34.046 "ddgst": false 00:29:34.046 }, 00:29:34.046 "method": "bdev_nvme_attach_controller" 00:29:34.046 }' 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:29:34.046 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:34.046 "params": { 00:29:34.046 "name": "Nvme1", 00:29:34.046 "trtype": "tcp", 00:29:34.046 "traddr": "10.0.0.2", 00:29:34.046 "adrfam": "ipv4", 00:29:34.046 "trsvcid": "4420", 00:29:34.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.046 "hdgst": false, 00:29:34.046 "ddgst": false 00:29:34.047 }, 00:29:34.047 "method": "bdev_nvme_attach_controller" 00:29:34.047 }' 00:29:34.047 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:29:34.047 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:34.047 "params": { 00:29:34.047 "name": "Nvme1", 00:29:34.047 "trtype": "tcp", 00:29:34.047 "traddr": "10.0.0.2", 00:29:34.047 "adrfam": "ipv4", 00:29:34.047 "trsvcid": "4420", 00:29:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.047 "hdgst": false, 00:29:34.047 "ddgst": false 00:29:34.047 }, 00:29:34.047 "method": "bdev_nvme_attach_controller" 00:29:34.047 }' 00:29:34.047 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:29:34.047 16:42:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:34.047 "params": { 00:29:34.047 "name": "Nvme1", 00:29:34.047 "trtype": "tcp", 00:29:34.047 "traddr": "10.0.0.2", 00:29:34.047 "adrfam": "ipv4", 00:29:34.047 "trsvcid": "4420", 00:29:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.047 "hdgst": false, 00:29:34.047 "ddgst": false 00:29:34.047 }, 00:29:34.047 "method": "bdev_nvme_attach_controller" 
00:29:34.047 }' 00:29:34.047 [2024-07-22 16:42:53.686128] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:34.047 [2024-07-22 16:42:53.686131] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:34.047 [2024-07-22 16:42:53.686165] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:34.047 [2024-07-22 16:42:53.686165] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:34.047 [2024-07-22 16:42:53.686223] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:34.047 [2024-07-22 16:42:53.686223] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:34.047 [2024-07-22 16:42:53.686242] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:34.047 [2024-07-22 16:42:53.686243] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:34.305 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.305 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.305 [2024-07-22 16:42:53.874942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.305 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.305 [2024-07-22 16:42:53.950979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:34.562 [2024-07-22 16:42:53.977469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.562 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.562 [2024-07-22 16:42:54.052114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.562 [2024-07-22 16:42:54.056742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:34.562 [2024-07-22 16:42:54.119634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:34.562 [2024-07-22 16:42:54.127865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.562 [2024-07-22 16:42:54.198506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:34.820 Running I/O for 1 seconds... 00:29:34.820 Running I/O for 1 seconds... 00:29:34.820 Running I/O for 1 seconds... 00:29:35.077 Running I/O for 1 seconds...
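The four bdevperf secondaries above each pin one workload (write on mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80) with a distinct -i shm id, so DPDK gives every process its own --file-prefix=spdkN hugepage namespace and they can run concurrently against the same cnode1 subsystem, each consuming its bdev config as JSON over /dev/fd/63. A hypothetical standalone re-run of just the write job against the live target from this run (10.0.0.2:4420), with the JSON written to a scratch file instead of a process substitution — the outer "subsystems" wrapper follows what gen_nvmf_target_json emits and is assumed here; only the inner attach-controller object appears verbatim in this log:

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # same flags as the write job above: core mask 0x10, shm id 1, qd 128, 4 KiB writes for 1 s
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256
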
00:29:36.009 00:29:36.009 Latency(us) 00:29:36.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.009 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:36.009 Nvme1n1 : 1.02 6540.15 25.55 0.00 0.00 19364.24 8398.32 27573.67 00:29:36.009 =================================================================================================================== 00:29:36.009 Total : 6540.15 25.55 0.00 0.00 19364.24 8398.32 27573.67 00:29:36.009 00:29:36.009 Latency(us) 00:29:36.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.009 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:36.009 Nvme1n1 : 1.01 9839.90 38.44 0.00 0.00 12952.62 7427.41 24855.13 00:29:36.009 =================================================================================================================== 00:29:36.009 Total : 9839.90 38.44 0.00 0.00 12952.62 7427.41 24855.13 00:29:36.009 00:29:36.009 Latency(us) 00:29:36.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.009 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:36.009 Nvme1n1 : 1.01 6564.83 25.64 0.00 0.00 19429.05 6699.24 43302.31 00:29:36.009 =================================================================================================================== 00:29:36.009 Total : 6564.83 25.64 0.00 0.00 19429.05 6699.24 43302.31 00:29:36.009 00:29:36.009 Latency(us) 00:29:36.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.009 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:36.009 Nvme1n1 : 1.00 199593.53 779.66 0.00 0.00 638.56 257.90 761.55 00:29:36.009 =================================================================================================================== 00:29:36.009 Total : 199593.53 779.66 0.00 0.00 638.56 257.90 761.55 00:29:36.009 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2782125 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2782128 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2782132 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:36.267 rmmod nvme_tcp 00:29:36.267 rmmod nvme_fabrics 00:29:36.267 rmmod nvme_keyring 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2782021 ']' 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2782021 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 2782021 ']' 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 2782021 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2782021 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2782021' 00:29:36.267 killing process with pid 2782021 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 2782021 00:29:36.267 16:42:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 2782021 00:29:36.526 16:42:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:36.526 16:42:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:36.526 16:42:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:36.526 16:42:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:36.526 16:42:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:36.526 16:42:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.526 16:42:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:36.526 16:42:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.058 16:42:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:39.058 00:29:39.058 real 0m7.505s 00:29:39.058 user 0m16.820s 00:29:39.058 sys 0m3.711s 00:29:39.058 16:42:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:39.058 16:42:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:39.058 ************************************ 00:29:39.058 END TEST nvmf_bdev_io_wait 00:29:39.058 ************************************ 00:29:39.058 16:42:58 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:29:39.058 16:42:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:39.058 16:42:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:39.058 16:42:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.058 ************************************ 00:29:39.058 START TEST nvmf_queue_depth 00:29:39.058 ************************************ 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:29:39.058 * Looking for test storage... 00:29:39.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.058 16:42:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:29:39.059 16:42:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:41.591 
16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:29:41.591 Found 0000:82:00.0 (0x8086 - 0x159b) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:29:41.591 Found 0000:82:00.1 (0x8086 - 0x159b) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:29:41.591 Found net devices under 0000:82:00.0: cvl_0_0 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.591 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:29:41.592 Found net devices under 0000:82:00.1: cvl_0_1 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:41.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:41.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:29:41.592 00:29:41.592 --- 10.0.0.2 ping statistics --- 00:29:41.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.592 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:41.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:29:41.592 00:29:41.592 --- 10.0.0.1 ping statistics --- 00:29:41.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.592 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2784684 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2784684 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 2784684 ']' 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:41.592 16:43:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.592 [2024-07-22 16:43:01.016020] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:29:41.592 [2024-07-22 16:43:01.016108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.592 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.592 [2024-07-22 16:43:01.088568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.592 [2024-07-22 16:43:01.171170] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.592 [2024-07-22 16:43:01.171221] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.592 [2024-07-22 16:43:01.171249] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.592 [2024-07-22 16:43:01.171261] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.592 [2024-07-22 16:43:01.171271] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.592 [2024-07-22 16:43:01.171312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.851 [2024-07-22 16:43:01.303090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.851 Malloc0 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.851 16:43:01 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.851 [2024-07-22 16:43:01.360555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2784708 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2784708 /var/tmp/bdevperf.sock 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 2784708 ']' 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:41.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:41.851 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.851 [2024-07-22 16:43:01.408828] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:29:41.851 [2024-07-22 16:43:01.408903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2784708 ]
00:29:41.851 EAL: No free 2048 kB hugepages reported on node 1
00:29:41.851 [2024-07-22 16:43:01.487152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:42.110 [2024-07-22 16:43:01.579640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:42.110 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:29:42.110 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0
00:29:42.110 16:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:42.110 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:42.110 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:29:42.368 NVMe0n1
00:29:42.368 16:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:42.368 16:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:42.368 Running I/O for 10 seconds...
00:29:54.566
00:29:54.566 Latency(us)
00:29:54.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:54.566 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:29:54.566 Verification LBA range: start 0x0 length 0x4000
00:29:54.566 NVMe0n1 : 10.10 8606.73 33.62 0.00 0.00 118488.97 24369.68 73788.68
00:29:54.566 ===================================================================================================================
00:29:54.566 Total : 8606.73 33.62 0.00 0.00 118488.97 24369.68 73788.68
00:29:54.566 0
00:29:54.566 16:43:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2784708
00:29:54.566 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 2784708 ']'
00:29:54.566 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 2784708
00:29:54.566 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:29:54.566 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:29:54.566 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2784708
00:29:54.566 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:29:54.566 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:29:54.566 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2784708'
00:29:54.566 killing process with pid 2784708
00:29:54.566 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 2784708
00:29:54.566 Received shutdown signal, test time was about 10.000000 seconds
00:29:54.567
00:29:54.567 Latency(us)
00:29:54.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:54.567 ===================================================================================================================
00:29:54.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 2784708
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:54.567 rmmod nvme_tcp
00:29:54.567 rmmod nvme_fabrics
00:29:54.567 rmmod nvme_keyring
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2784684 ']'
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2784684
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 2784684 ']'
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 2784684
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2784684
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2784684'
00:29:54.567 killing process with pid 2784684
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 2784684
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 2784684
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:54.567 16:43:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:55.133 16:43:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:55.133
00:29:55.133 real 0m16.530s
00:29:55.133 user 0m22.487s
00:29:55.133 sys
0m3.599s 00:29:55.133 16:43:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:55.133 16:43:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:55.133 ************************************ 00:29:55.133 END TEST nvmf_queue_depth 00:29:55.133 ************************************ 00:29:55.133 16:43:14 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:29:55.133 16:43:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:55.133 16:43:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:55.133 16:43:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.133 ************************************ 00:29:55.133 START TEST nvmf_target_multipath 00:29:55.133 ************************************ 00:29:55.133 16:43:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:29:55.392 * Looking for test storage... 00:29:55.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.392 16:43:14 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
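The nvmf_queue_depth run that finished above drives bdevperf entirely over its RPC socket: the NVMe-oF controller is attached with bdev_nvme_attach_controller, then bdevperf.py fires perform_tests. A minimal standalone sketch of that flow, assuming an SPDK target is already listening on 10.0.0.2:4420; the -z (start idle, wait for RPC) and -r (RPC socket path) flags are not visible in the trace above and are assumptions here:

    # start bdevperf idle, waiting to be configured over RPC (assumed flags: -z, -r)
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # attach the remote controller as bdev "NVMe0" (arguments as in the trace above)
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # trigger the run; this prints the Latency(us) table seen in the log
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The queue depth (1024) and I/O size (4096) in the sketch mirror the Job line of the results table above.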
00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:29:55.392 16:43:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:29:57.926 Found 0000:82:00.0 (0x8086 - 0x159b) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:29:57.926 Found 0000:82:00.1 (0x8086 - 0x159b) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:29:57.926 Found net devices under 0000:82:00.0: cvl_0_0 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.926 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:29:57.927 Found net devices under 0000:82:00.1: cvl_0_1 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:57.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:29:57.927 00:29:57.927 --- 10.0.0.2 ping statistics --- 00:29:57.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.927 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:57.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:29:57.927 00:29:57.927 --- 10.0.0.1 ping statistics --- 00:29:57.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.927 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:57.927 only one NIC for nvmf test 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:57.927 rmmod nvme_tcp 00:29:57.927 rmmod nvme_fabrics 00:29:57.927 rmmod nvme_keyring 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.927 16:43:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:00.458 00:30:00.458 real 0m4.743s 00:30:00.458 user 0m0.965s 00:30:00.458 sys 0m1.786s 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:00.458 16:43:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:00.458 ************************************ 00:30:00.458 END TEST nvmf_target_multipath 00:30:00.458 ************************************ 00:30:00.458 16:43:19 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:30:00.458 16:43:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:00.458 16:43:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:00.458 16:43:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.458 ************************************ 00:30:00.458 START TEST nvmf_zcopy 00:30:00.458 ************************************ 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:30:00.458 * Looking for test storage... 
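Before bailing out with "only one NIC for nvmf test" (multipath needs a second initiator interface), the test's nvmftestinit built the single-pair topology used throughout this run, and the nvmf_zcopy test starting below rebuilds it identically: one port of the E810 pair is moved into a network namespace to act as the target, while the other stays in the root namespace as the initiator. A recap sketch, with interface names and addresses taken verbatim from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # sanity check, initiator -> target

This is why both ping checks above complete with sub-millisecond RTTs: the two endpoints are local ports of the same machine's NIC pair, separated only by the namespace boundary.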
00:30:00.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.458 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:00.459 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:00.459 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:00.459 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.459 16:43:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.459 16:43:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.459 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:00.459 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:00.459 16:43:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:30:00.459 16:43:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.358 16:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:30:02.358 Found 0000:82:00.0 (0x8086 - 0x159b) 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.358 
16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:30:02.358 Found 0000:82:00.1 (0x8086 - 0x159b) 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.358 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.359 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:30:02.617 Found net devices under 0000:82:00.0: cvl_0_0 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:30:02.617 Found net devices under 0000:82:00.1: cvl_0_1 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:02.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:30:02.617 00:30:02.617 --- 10.0.0.2 ping statistics --- 00:30:02.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.617 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:02.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:30:02.617 00:30:02.617 --- 10.0.0.1 ping statistics --- 00:30:02.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.617 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2790570 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2790570 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 2790570 ']' 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:02.617 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:02.618 [2024-07-22 16:43:22.231161] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:02.618 [2024-07-22 16:43:22.231237] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.876 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.876 [2024-07-22 16:43:22.306135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.876 [2024-07-22 16:43:22.389024] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.876 [2024-07-22 16:43:22.389090] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:02.876 [2024-07-22 16:43:22.389111] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.876 [2024-07-22 16:43:22.389128] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.876 [2024-07-22 16:43:22.389144] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.876 [2024-07-22 16:43:22.389179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.876 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:02.876 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:30:02.876 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:02.876 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:02.876 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:03.134 [2024-07-22 16:43:22.535211] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:03.134 [2024-07-22 16:43:22.551420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:03.134 malloc0 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.134 
16:43:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.134 { 00:30:03.134 "params": { 00:30:03.134 "name": "Nvme$subsystem", 00:30:03.134 "trtype": "$TEST_TRANSPORT", 00:30:03.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.134 "adrfam": "ipv4", 00:30:03.134 "trsvcid": "$NVMF_PORT", 00:30:03.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.134 "hdgst": ${hdgst:-false}, 00:30:03.134 "ddgst": ${ddgst:-false} 00:30:03.134 }, 00:30:03.134 "method": "bdev_nvme_attach_controller" 00:30:03.134 } 00:30:03.134 EOF 00:30:03.134 )") 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:30:03.134 16:43:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:03.134 "params": { 00:30:03.134 "name": "Nvme1", 00:30:03.134 "trtype": "tcp", 00:30:03.134 "traddr": "10.0.0.2", 00:30:03.134 "adrfam": "ipv4", 00:30:03.134 "trsvcid": "4420", 00:30:03.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.134 "hdgst": false, 00:30:03.134 "ddgst": false 00:30:03.134 }, 00:30:03.134 "method": "bdev_nvme_attach_controller" 00:30:03.134 }' 00:30:03.134 [2024-07-22 16:43:22.631164] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:03.134 [2024-07-22 16:43:22.631252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2790599 ] 00:30:03.134 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.134 [2024-07-22 16:43:22.698279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.391 [2024-07-22 16:43:22.794512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.648 Running I/O for 10 seconds... 
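The target side for this zcopy run was assembled just above via rpc_cmd, the harness wrapper around scripts/rpc.py: a TCP transport with zero-copy enabled, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. The equivalent standalone calls, assuming rpc.py is run inside the target namespace against the default /var/tmp/spdk.sock:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs in the records that follow appear to be the test re-issuing nvmf_subsystem_add_ns for the same NSID while I/O is in flight; each attempt is rejected because namespace 1 already exists, so the *ERROR* lines read as exercised-path noise rather than a failure of the run.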
00:30:13.615
00:30:13.615 Latency(us)
00:30:13.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:13.615 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:30:13.615 Verification LBA range: start 0x0 length 0x1000
00:30:13.615 Nvme1n1 : 10.01 5819.95 45.47 0.00 0.00 21933.47 3786.52 33399.09
00:30:13.615 ===================================================================================================================
00:30:13.615 Total : 5819.95 45.47 0.00 0.00 21933.47 3786.52 33399.09
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2791888
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:30:13.874 {
00:30:13.874 "params": {
00:30:13.874 "name": "Nvme$subsystem",
00:30:13.874 "trtype": "$TEST_TRANSPORT",
00:30:13.874 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:13.874 "adrfam": "ipv4",
00:30:13.874 "trsvcid": "$NVMF_PORT",
00:30:13.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:13.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:13.874 "hdgst": ${hdgst:-false},
00:30:13.874 "ddgst": ${ddgst:-false}
00:30:13.874 },
00:30:13.874 "method": "bdev_nvme_attach_controller"
00:30:13.874 }
00:30:13.874 EOF
00:30:13.874 )")
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:30:13.874 [2024-07-22 16:43:33.412745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:13.874 [2024-07-22 16:43:33.412791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
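Unlike the first invocation, the 5-second randrw bdevperf launched above takes its controller definition from a JSON config rather than live RPC calls; the /dev/fd/62 and /dev/fd/63 paths in the two command lines are simply what bash process substitution (<(gen_nvmf_target_json)) expands to. A sketch of an equivalent self-contained invocation, with the params object copied from the printf output below and the outer subsystems/config wrapper assumed from SPDK's JSON config format (the trace only shows the inner fragment):

    ./build/examples/bdevperf -t 5 -q 128 -w randrw -M 50 -o 8192 --json <(cat <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }]
      }]
    }
    EOF
    )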
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:30:13.874 16:43:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:30:13.874 "params": {
00:30:13.874 "name": "Nvme1",
00:30:13.874 "trtype": "tcp",
00:30:13.874 "traddr": "10.0.0.2",
00:30:13.874 "adrfam": "ipv4",
00:30:13.874 "trsvcid": "4420",
00:30:13.874 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:13.874 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:13.874 "hdgst": false,
00:30:13.874 "ddgst": false
00:30:13.874 },
00:30:13.874 "method": "bdev_nvme_attach_controller"
00:30:13.874 }'
00:30:13.874 [2024-07-22 16:43:33.420705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:13.874 [2024-07-22 16:43:33.420736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same *ERROR* pair repeats at 16:43:33.428719, 16:43:33.436735 and 16:43:33.444754 ...]
00:30:13.874 [2024-07-22 16:43:33.449119] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:30:13.874 [2024-07-22 16:43:33.449184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2791888 ]
[... the same *ERROR* pair repeats from 16:43:33.452774 through 16:43:33.476836 ...]
00:30:13.874 EAL: No free 2048 kB hugepages reported on node 1
[... the same *ERROR* pair repeats from 16:43:33.484859 through 16:43:33.516974 ...]
00:30:13.874 [2024-07-22 16:43:33.522431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[... the same *ERROR* pair repeats from 16:43:33.525011 through 16:43:33.613231 ...]
00:30:14.134 [2024-07-22 16:43:33.619362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... the same *ERROR* pair repeats from 16:43:33.621271 through 16:43:33.837864 ...]
00:30:14.393 Running I/O for 5 seconds...
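This second bdevperf pass is a 50/50 mixed workload (-w randrw -M 50, per the invocation above) run for five seconds, and the *ERROR* pairs that fill the rest of the run come from the test re-issuing the nvmf_subsystem_add_ns RPC seen at target/zcopy.sh@30 while that I/O is in flight: NSID 1 already belongs to malloc0, so every attempt fails inside the pause/resume path (nvmf_rpc_ns_paused). A sketch of the kind of retry loop that would produce this pattern, assuming the script polls the bdevperf PID captured in perfpid above; the actual loop body in target/zcopy.sh may differ:

# Re-issue an add-namespace RPC that is expected to fail while I/O runs;
# each call logs one "Requested NSID 1 already in use" /
# "Unable to add namespace" pair in the target log.
while kill -0 "$perfpid" 2>/dev/null; do
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done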
00:30:14.393 [2024-07-22 16:43:33.845882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:14.393 [2024-07-22 16:43:33.845904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same *ERROR* pair repeats continuously while the run is in progress, from 16:43:33.857855 through 16:43:36.303105 ...]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.726 [2024-07-22 16:43:36.314269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.726 [2024-07-22 16:43:36.314295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.726 [2024-07-22 16:43:36.326169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.726 [2024-07-22 16:43:36.326195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.726 [2024-07-22 16:43:36.338268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.726 [2024-07-22 16:43:36.338294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.726 [2024-07-22 16:43:36.349822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.726 [2024-07-22 16:43:36.349852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.726 [2024-07-22 16:43:36.362007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.726 [2024-07-22 16:43:36.362032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.726 [2024-07-22 16:43:36.373821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.726 [2024-07-22 16:43:36.373852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.384913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.384939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.396568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.396600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.408332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.408364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.419982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.420027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.431858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.431889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.443712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.443745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.455352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.455384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.467358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.467390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.479288] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.479333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.491083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.491117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.502608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.502639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.514637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.514668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.526170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.526197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.538413] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.538445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.550600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.550631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.561711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.561742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.573281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.573321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.585477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.585509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.596584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.596615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.609119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.609146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.621141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.621167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.985 [2024-07-22 16:43:36.633051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.985 [2024-07-22 16:43:36.633093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.645254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.645293] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.656877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.656908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.668701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.668732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.680045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.680073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.692268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.692294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.704165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.704192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.716366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.716408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.727876] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.727907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.739338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.739370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.751430] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.751462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.763150] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.763178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.775060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.775087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.786590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.786621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.798428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.798460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.810073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.810100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.821766] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.821798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.833562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.833593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.845935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.845977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.857632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.857664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.869165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.869192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.881147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.881173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.245 [2024-07-22 16:43:36.893034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.245 [2024-07-22 16:43:36.893061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:36.904357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:36.904389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:36.916121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:36.916148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:36.927670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:36.927701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:36.939725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:36.939756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:36.951780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:36.951811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:36.963560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:36.963591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:36.975642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:36.975673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:36.987315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:36.987357] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:36.999284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:36.999308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.011199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.011226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.022838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.022870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.034717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.034748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.046265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.046292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.057809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.057840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.069700] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.069730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.081240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.081272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.092624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.092655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.104394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.104425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.115056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.115084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.125437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.125463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.135690] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.135716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.505 [2024-07-22 16:43:37.145876] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.505 [2024-07-22 16:43:37.145903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.156640] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.156666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.167504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.167529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.178564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.178590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.189469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.189495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.199899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.199924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.210615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.210640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.221092] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.221119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.233362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.233388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.242877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.242903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.253991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.254018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.264438] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.264463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.274994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.275022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.287398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.287426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.296899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.296925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.308032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.308058] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.318058] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.318086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.328269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.328297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.338220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.338262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.764 [2024-07-22 16:43:37.348679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.764 [2024-07-22 16:43:37.348704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.765 [2024-07-22 16:43:37.359146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.765 [2024-07-22 16:43:37.359174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.765 [2024-07-22 16:43:37.369473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.765 [2024-07-22 16:43:37.369499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.765 [2024-07-22 16:43:37.380045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.765 [2024-07-22 16:43:37.380072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.765 [2024-07-22 16:43:37.391044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.765 [2024-07-22 16:43:37.391071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.765 [2024-07-22 16:43:37.403166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.765 [2024-07-22 16:43:37.403195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.765 [2024-07-22 16:43:37.413279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.765 [2024-07-22 16:43:37.413307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.023 [2024-07-22 16:43:37.424372] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.023 [2024-07-22 16:43:37.424398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.023 [2024-07-22 16:43:37.434612] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.023 [2024-07-22 16:43:37.434638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.445183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.445211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.455599] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.455624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.466299] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.466340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.477901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.477926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.486884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.486909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.498149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.498177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.508680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.508706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.519424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.519450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.530449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.530474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.541346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.541372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.552122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.552149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.562793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.562818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.575033] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.575060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.584730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.584756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.595984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.596025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.606624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.606650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.617265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.617290] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.627211] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.627238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.637878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.637903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.648369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.648395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.659184] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.659212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.024 [2024-07-22 16:43:37.669825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.024 [2024-07-22 16:43:37.669867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.681020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.681048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.693375] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.693400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.703521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.703546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.714816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.714842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.725432] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.725457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.735863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.735888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.746161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.746190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.756998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.757033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.767693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.767719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.778057] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.778085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.790288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.790331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.802422] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.802454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.814097] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.814123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.826153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.826181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.837839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.837870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.849735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.849767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.860921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.860953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.874303] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.874335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.885233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.885290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.897139] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.897167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.908788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.908820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.922110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.922138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.284 [2024-07-22 16:43:37.932717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.284 [2024-07-22 16:43:37.932749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:37.945543] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:37.945583] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:37.957240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:37.957282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:37.968790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:37.968822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:37.980533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:37.980572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:37.991923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:37.991954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.003780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.003812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.015640] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.015671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.027285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.027327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.039549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.039581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.051140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.051168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.062771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.062803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.074149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.074177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.085934] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.085976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.099320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.099373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.110383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.110415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.122920] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.122951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.134620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.134652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.146259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.146286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.157695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.157727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.169234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.169275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.181282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.181324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.543 [2024-07-22 16:43:38.193237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.543 [2024-07-22 16:43:38.193282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.802 [2024-07-22 16:43:38.205140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.802 [2024-07-22 16:43:38.205173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.802 [2024-07-22 16:43:38.216780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.802 [2024-07-22 16:43:38.216811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.802 [2024-07-22 16:43:38.228493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.802 [2024-07-22 16:43:38.228524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.802 [2024-07-22 16:43:38.239333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.239364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.251862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.251893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.263688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.263720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.275255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.275281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.287045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.287072] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.298763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.298794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.312299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.312342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.323005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.323050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.335210] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.335238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.347221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.347262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.363273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.363300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.373877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.373907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.386006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.386056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.397357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.397389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.409336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.409368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.421275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.421308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.432618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.432657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.803 [2024-07-22 16:43:38.444262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.803 [2024-07-22 16:43:38.444290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.456743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.456775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.468875] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.468907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.479718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.479749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.490981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.491027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.502654] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.502686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.515210] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.515237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.527744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.527775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.539372] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.539403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.551024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.551050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.562919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.562951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.575068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.575095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.587271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.587297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.598824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.598855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.610709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.610740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.622949] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.622989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.634620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.634651] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.646515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.646547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.658307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.658352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.670618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.670649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.682724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.682756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.694513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.694544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.062 [2024-07-22 16:43:38.707757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.062 [2024-07-22 16:43:38.707788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.718948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.718989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.730996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.731040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.742884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.742915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.754751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.754783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.766477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.766509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.778168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.778195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.789771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.789802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.801980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.802024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.813469] 
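The block above is the signature of a duplicate-NSID negative test: the nvmf_subsystem_add_ns RPC is driven in a loop against a namespace ID the subsystem already owns, so spdk_nvmf_subsystem_add_ns_ext rejects each attempt and nvmf_rpc_ns_paused reports the failure. A minimal sketch of reproducing the same error pair by hand against a running target follows; the NQN, serial, bdev names, and sizes are illustrative assumptions, not values taken from this log.

  # hypothetical setup: one subsystem and two malloc bdevs (64 MiB, 512-byte blocks)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  # first add succeeds and claims NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  # second add with the same explicit NSID is rejected:
  #   subsystem.c: *ERROR*: Requested NSID 1 already in use
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1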
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.813500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.825039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.825066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.836747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.836778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.848040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.848066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.859882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.859913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.868701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.868731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 00:30:19.321 Latency(us) 00:30:19.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.321 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:19.321 Nvme1n1 : 5.01 11237.87 87.80 0.00 0.00 11374.84 4660.34 19126.80 00:30:19.321 =================================================================================================================== 00:30:19.321 Total : 11237.87 87.80 0.00 0.00 11374.84 4660.34 19126.80 00:30:19.321 [2024-07-22 16:43:38.874619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.874648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.882638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.882667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.890688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.890727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.898736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.898788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.906753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.906801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.914775] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.914825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.922795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.922843] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.930821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.930871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.938843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.938889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.946866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.946917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.954888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.954940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.962911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.962962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.321 [2024-07-22 16:43:38.970931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.321 [2024-07-22 16:43:38.970991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.580 [2024-07-22 16:43:38.978954] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.580 [2024-07-22 16:43:38.979012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.580 [2024-07-22 16:43:38.986979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.580 [2024-07-22 16:43:38.987028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.580 [2024-07-22 16:43:38.994983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.580 [2024-07-22 16:43:38.995040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.580 [2024-07-22 16:43:39.002980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.580 [2024-07-22 16:43:39.003023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.580 [2024-07-22 16:43:39.011041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.580 [2024-07-22 16:43:39.011086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.580 [2024-07-22 16:43:39.019066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.580 [2024-07-22 16:43:39.019116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.580 [2024-07-22 16:43:39.027089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.580 [2024-07-22 16:43:39.027130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.580 [2024-07-22 16:43:39.035071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.580 [2024-07-22 16:43:39.035096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.580 [2024-07-22 16:43:39.043117] 
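The flood of paired errors above is the zcopy test's NSID-conflict loop: while the abort workload keeps NSID 1 of nqn.2016-06.io.spdk:cnode1 busy, the script appears to retry nvmf_subsystem_add_ns with the same NSID until the I/O process (2791888, killed just below) has exited, and the target rejects every attempt. A minimal sketch of the conflict, assuming an SPDK target that already exports NSID 1 on cnode1; the malloc1 bdev name is hypothetical and paths are relative to an SPDK checkout:

    # create a second bdev and try to attach it under an NSID that is already taken (malloc1 is a made-up name)
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1
    # -> "Requested NSID 1 already in use"; retrying changes nothing until NSID 1 is removed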
00:30:19.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2791888) - No such process
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2791888
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:19.580 delay0
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:19.580 16:43:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:30:19.580 EAL: No free 2048 kB hugepages reported on node 1
00:30:19.580 [2024-07-22 16:43:39.197006] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:30:26.135 Initializing NVMe Controllers
00:30:26.135 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:26.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:26.135 Initialization complete. Launching workers.
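Context for the abort statistics that follow: once the old namespace is removed, zcopy.sh stacks a delay bdev (delay0) on top of malloc0 with one second (1000000 us) of added latency for reads and writes, so queued commands stay in flight long enough for the abort example to catch them. A condensed sketch of the same setup, with every flag taken from the trace above and paths relative to the SPDK build tree:

    # swap the namespace for a deliberately slow one, then fire aborts at it
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'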
00:30:26.135 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 293, failed: 6652 00:30:26.135 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6881, failed to submit 64 00:30:26.135 success 6772, unsuccess 109, failed 0 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:26.135 rmmod nvme_tcp 00:30:26.135 rmmod nvme_fabrics 00:30:26.135 rmmod nvme_keyring 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2790570 ']' 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2790570 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 2790570 ']' 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 2790570 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2790570 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2790570' 00:30:26.135 killing process with pid 2790570 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 2790570 00:30:26.135 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 2790570 00:30:26.393 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:26.393 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:26.393 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:26.393 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:26.393 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:26.393 16:43:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.393 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.393 16:43:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.292 16:43:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:28.292 00:30:28.292 real 0m28.348s 00:30:28.292 user 0m39.942s 00:30:28.292 sys 0m10.136s 00:30:28.292 16:43:47 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:30:28.292 16:43:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:28.292 ************************************ 00:30:28.292 END TEST nvmf_zcopy 00:30:28.292 ************************************ 00:30:28.292 16:43:47 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:30:28.292 16:43:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:28.292 16:43:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:28.292 16:43:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:28.551 ************************************ 00:30:28.551 START TEST nvmf_nmic 00:30:28.551 ************************************ 00:30:28.551 16:43:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:30:28.551 * Looking for test storage... 00:30:28.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.551 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:30:28.552 16:43:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:30:31.178 Found 0000:82:00.0 (0x8086 - 0x159b) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:30:31.178 Found 0000:82:00.1 (0x8086 - 0x159b) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:30:31.178 Found net devices under 0000:82:00.0: cvl_0_0 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:30:31.178 Found net devices under 0000:82:00.1: cvl_0_1 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.178 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:31.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:30:31.179 00:30:31.179 --- 10.0.0.2 ping statistics --- 00:30:31.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.179 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:31.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:30:31.179 00:30:31.179 --- 10.0.0.1 ping statistics --- 00:30:31.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.179 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2795579 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2795579 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 2795579 ']' 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:31.179 16:43:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.179 [2024-07-22 16:43:50.778788] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:31.179 [2024-07-22 16:43:50.778855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.476 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.476 [2024-07-22 16:43:50.855198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:31.476 [2024-07-22 16:43:50.941928] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.476 [2024-07-22 16:43:50.942004] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
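Stepping back: the nvmf_tcp_init sequence traced above (common.sh@242-268) built the two-endpoint topology this target now runs on. The first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and becomes the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace, same device and namespace names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator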
00:30:31.476 [2024-07-22 16:43:50.942018] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.476 [2024-07-22 16:43:50.942029] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.476 [2024-07-22 16:43:50.942039] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:31.476 [2024-07-22 16:43:50.942090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.476 [2024-07-22 16:43:50.942147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:31.476 [2024-07-22 16:43:50.942214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:31.476 [2024-07-22 16:43:50.942217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.476 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:31.476 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:30:31.476 16:43:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:31.476 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.476 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.476 16:43:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.476 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:31.476 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.476 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.476 [2024-07-22 16:43:51.102844] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.735 Malloc0 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.735 [2024-07-22 16:43:51.154361] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:31.735 test case1: single bdev can't be used in multiple subsystems 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.735 [2024-07-22 16:43:51.178214] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:31.735 [2024-07-22 16:43:51.178260] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:31.735 [2024-07-22 16:43:51.178275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.735 request: 00:30:31.735 { 00:30:31.735 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:31.735 "namespace": { 00:30:31.735 "bdev_name": "Malloc0", 00:30:31.735 "no_auto_visible": false 00:30:31.735 }, 00:30:31.735 "method": "nvmf_subsystem_add_ns", 00:30:31.735 "req_id": 1 00:30:31.735 } 00:30:31.735 Got JSON-RPC error response 00:30:31.735 response: 00:30:31.735 { 00:30:31.735 "code": -32602, 00:30:31.735 "message": "Invalid parameters" 00:30:31.735 } 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:31.735 Adding namespace failed - expected result. 
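Test case1 above exercises bdev claiming: cnode1 holds Malloc0 with an exclusive_write claim, so a second subsystem cannot open the same bdev, and the RPC fails with -32602 exactly as shown in the JSON response. The failing pair of calls, condensed from the trace with the same names:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
    # -> bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target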
00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:31.735 test case2: host connect to nvmf target in multiple paths 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:31.735 [2024-07-22 16:43:51.186331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.735 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:32.301 16:43:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:33.234 16:43:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:33.234 16:43:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:30:33.234 16:43:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:30:33.234 16:43:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:30:33.234 16:43:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:30:35.133 16:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:30:35.133 16:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:30:35.133 16:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:30:35.133 16:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:30:35.133 16:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:30:35.133 16:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:30:35.133 16:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:35.133 [global] 00:30:35.133 thread=1 00:30:35.133 invalidate=1 00:30:35.133 rw=write 00:30:35.133 time_based=1 00:30:35.133 runtime=1 00:30:35.133 ioengine=libaio 00:30:35.133 direct=1 00:30:35.133 bs=4096 00:30:35.133 iodepth=1 00:30:35.133 norandommap=0 00:30:35.133 numjobs=1 00:30:35.133 00:30:35.133 verify_dump=1 00:30:35.133 verify_backlog=512 00:30:35.133 verify_state_save=0 00:30:35.133 do_verify=1 00:30:35.133 verify=crc32c-intel 00:30:35.133 [job0] 00:30:35.133 filename=/dev/nvme0n1 00:30:35.133 Could not set queue depth (nvme0n1) 00:30:35.133 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:35.133 fio-3.35 00:30:35.133 Starting 1 thread 00:30:36.505 00:30:36.505 job0: (groupid=0, jobs=1): err= 0: pid=2796101: Mon Jul 22 16:43:55 2024 00:30:36.505 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:30:36.505 slat (nsec): min=8847, max=34220, avg=16353.00, stdev=5978.54 
00:30:36.505 clat (usec): min=40793, max=41257, avg=40984.87, stdev=77.51 00:30:36.505 lat (usec): min=40826, max=41270, avg=41001.22, stdev=74.92 00:30:36.505 clat percentiles (usec): 00:30:36.505 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:36.505 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:36.505 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:36.505 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:36.505 | 99.99th=[41157] 00:30:36.505 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:30:36.505 slat (nsec): min=8575, max=68508, avg=16182.14, stdev=8414.96 00:30:36.505 clat (usec): min=153, max=722, avg=272.93, stdev=70.58 00:30:36.505 lat (usec): min=163, max=739, avg=289.11, stdev=74.58 00:30:36.505 clat percentiles (usec): 00:30:36.505 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 206], 00:30:36.505 | 30.00th=[ 225], 40.00th=[ 241], 50.00th=[ 265], 60.00th=[ 297], 00:30:36.505 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 363], 95.00th=[ 396], 00:30:36.505 | 99.00th=[ 449], 99.50th=[ 457], 99.90th=[ 725], 99.95th=[ 725], 00:30:36.505 | 99.99th=[ 725] 00:30:36.505 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:30:36.505 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:36.505 lat (usec) : 250=42.21%, 500=53.66%, 750=0.19% 00:30:36.505 lat (msec) : 50=3.94% 00:30:36.505 cpu : usr=0.89%, sys=0.79%, ctx=533, majf=0, minf=2 00:30:36.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:36.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.505 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:36.505 00:30:36.505 Run status group 0 (all jobs): 00:30:36.505 READ: bw=83.0KiB/s (85.0kB/s), 83.0KiB/s-83.0KiB/s (85.0kB/s-85.0kB/s), io=84.0KiB (86.0kB), run=1012-1012msec 00:30:36.505 WRITE: bw=2024KiB/s (2072kB/s), 2024KiB/s-2024KiB/s (2072kB/s-2072kB/s), io=2048KiB (2097kB), run=1012-1012msec 00:30:36.505 00:30:36.505 Disk stats (read/write): 00:30:36.505 nvme0n1: ios=68/512, merge=0/0, ticks=768/136, in_queue=904, util=91.88% 00:30:36.505 16:43:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:36.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:36.505 rmmod nvme_tcp 00:30:36.505 rmmod nvme_fabrics 00:30:36.505 rmmod nvme_keyring 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2795579 ']' 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2795579 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 2795579 ']' 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 2795579 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2795579 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2795579' 00:30:36.505 killing process with pid 2795579 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 2795579 00:30:36.505 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 2795579 00:30:37.072 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:37.072 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:37.072 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:37.072 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:37.072 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:37.072 16:43:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.072 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:37.072 16:43:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.972 16:43:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:38.972 00:30:38.972 real 0m10.502s 00:30:38.972 user 0m22.850s 00:30:38.972 sys 0m2.701s 00:30:38.972 16:43:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:38.972 16:43:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.972 ************************************ 00:30:38.972 END TEST nvmf_nmic 00:30:38.972 ************************************ 00:30:38.972 16:43:58 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:30:38.972 16:43:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:38.972 16:43:58 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:30:38.972 16:43:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:38.972 ************************************ 00:30:38.972 START TEST nvmf_fio_target 00:30:38.972 ************************************ 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:30:38.972 * Looking for test storage... 00:30:38.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:30:38.972 16:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.500 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.501 16:44:01 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:30:41.501 Found 0000:82:00.0 (0x8086 - 0x159b) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:30:41.501 Found 0000:82:00.1 (0x8086 - 0x159b) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.501 16:44:01 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:30:41.501 Found net devices under 0000:82:00.0: cvl_0_0 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:30:41.501 Found net devices under 0000:82:00.1: cvl_0_1 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:41.501 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:41.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:30:41.760 00:30:41.760 --- 10.0.0.2 ping statistics --- 00:30:41.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.760 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:30:41.760 00:30:41.760 --- 10.0.0.1 ping statistics --- 00:30:41.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.760 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2798578 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2798578 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 2798578 ']' 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
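The nvmftestinit phase above is self-contained enough to reuse: it flushes and splits the two cvl_0_x ports of the E810 NIC between the default namespace (the NVMe/TCP initiator side) and a private namespace that will host the target, verifies reachability in both directions, and only then launches nvmf_tgt inside that namespace. A condensed sketch of that plumbing, using the same interface names and addresses as this run (root required; the nvmf_tgt path is abbreviated from the full workspace path logged above):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# private namespace for the target; one port of the NIC pair moves into it
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator keeps 10.0.0.1 on cvl_0_1; the target port gets 10.0.0.2 inside the netns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# bring both ports and the namespace loopback up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port on the initiator interface, then ping both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# start the target inside the namespace so its listener can bind 10.0.0.2:4420
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF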
00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:41.760 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:41.760 [2024-07-22 16:44:01.285920] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:41.760 [2024-07-22 16:44:01.286039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.760 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.760 [2024-07-22 16:44:01.361669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:42.018 [2024-07-22 16:44:01.452484] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.018 [2024-07-22 16:44:01.452548] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.018 [2024-07-22 16:44:01.452561] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.018 [2024-07-22 16:44:01.452573] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.018 [2024-07-22 16:44:01.452583] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.018 [2024-07-22 16:44:01.452652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.018 [2024-07-22 16:44:01.452710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.018 [2024-07-22 16:44:01.452778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.018 [2024-07-22 16:44:01.452780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.018 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:42.018 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:30:42.018 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:42.018 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:42.018 16:44:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:42.018 16:44:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.018 16:44:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:42.275 [2024-07-22 16:44:01.866658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.275 16:44:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:42.532 16:44:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:42.532 16:44:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:42.790 16:44:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:42.790 16:44:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:43.048 16:44:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:43.048 16:44:02 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:43.306 16:44:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:43.306 16:44:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:43.562 16:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:43.819 16:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:43.819 16:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:44.077 16:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:44.077 16:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:44.336 16:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:44.336 16:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:44.593 16:44:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:44.850 16:44:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:44.850 16:44:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:45.108 16:44:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:45.108 16:44:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:45.365 16:44:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.622 [2024-07-22 16:44:05.165314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.622 16:44:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:45.879 16:44:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:46.137 16:44:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:46.702 16:44:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:46.702 16:44:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:30:46.702 16:44:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # 
local nvme_device_counter=1 nvme_devices=0 00:30:46.702 16:44:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:30:46.702 16:44:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:30:46.702 16:44:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:30:48.599 16:44:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:30:48.599 16:44:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:30:48.599 16:44:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:30:48.599 16:44:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:30:48.599 16:44:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:30:48.599 16:44:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:30:48.599 16:44:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:48.856 [global] 00:30:48.856 thread=1 00:30:48.856 invalidate=1 00:30:48.856 rw=write 00:30:48.856 time_based=1 00:30:48.856 runtime=1 00:30:48.856 ioengine=libaio 00:30:48.856 direct=1 00:30:48.856 bs=4096 00:30:48.856 iodepth=1 00:30:48.856 norandommap=0 00:30:48.856 numjobs=1 00:30:48.856 00:30:48.856 verify_dump=1 00:30:48.856 verify_backlog=512 00:30:48.856 verify_state_save=0 00:30:48.856 do_verify=1 00:30:48.856 verify=crc32c-intel 00:30:48.856 [job0] 00:30:48.856 filename=/dev/nvme0n1 00:30:48.856 [job1] 00:30:48.856 filename=/dev/nvme0n2 00:30:48.856 [job2] 00:30:48.856 filename=/dev/nvme0n3 00:30:48.856 [job3] 00:30:48.856 filename=/dev/nvme0n4 00:30:48.856 Could not set queue depth (nvme0n1) 00:30:48.856 Could not set queue depth (nvme0n2) 00:30:48.856 Could not set queue depth (nvme0n3) 00:30:48.856 Could not set queue depth (nvme0n4) 00:30:48.856 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:48.856 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:48.856 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:48.856 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:48.856 fio-3.35 00:30:48.856 Starting 4 threads 00:30:50.229 00:30:50.229 job0: (groupid=0, jobs=1): err= 0: pid=2799536: Mon Jul 22 16:44:09 2024 00:30:50.229 read: IOPS=115, BW=462KiB/s (473kB/s)(472KiB/1022msec) 00:30:50.229 slat (nsec): min=6581, max=79880, avg=12338.01, stdev=10363.64 00:30:50.230 clat (usec): min=316, max=41433, avg=7607.92, stdev=15532.37 00:30:50.230 lat (usec): min=323, max=41446, avg=7620.26, stdev=15533.93 00:30:50.230 clat percentiles (usec): 00:30:50.230 | 1.00th=[ 322], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 347], 00:30:50.230 | 30.00th=[ 355], 40.00th=[ 367], 50.00th=[ 396], 60.00th=[ 441], 00:30:50.230 | 70.00th=[ 537], 80.00th=[ 635], 90.00th=[41157], 95.00th=[41157], 00:30:50.230 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:30:50.230 | 99.99th=[41681] 00:30:50.230 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:30:50.230 slat (nsec): min=9049, max=40042, avg=11304.64, stdev=3171.00 00:30:50.230 clat (usec): min=149, max=424, 
avg=224.34, stdev=33.08 00:30:50.230 lat (usec): min=158, max=436, avg=235.65, stdev=33.26 00:30:50.230 clat percentiles (usec): 00:30:50.230 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:30:50.230 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 227], 00:30:50.230 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 277], 00:30:50.230 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 424], 99.95th=[ 424], 00:30:50.230 | 99.99th=[ 424] 00:30:50.230 bw ( KiB/s): min= 4096, max= 4096, per=25.55%, avg=4096.00, stdev= 0.00, samples=1 00:30:50.230 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:50.230 lat (usec) : 250=71.59%, 500=22.06%, 750=3.02% 00:30:50.230 lat (msec) : 50=3.33% 00:30:50.230 cpu : usr=0.29%, sys=0.78%, ctx=632, majf=0, minf=1 00:30:50.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.230 issued rwts: total=118,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:50.230 job1: (groupid=0, jobs=1): err= 0: pid=2799546: Mon Jul 22 16:44:09 2024 00:30:50.230 read: IOPS=1187, BW=4748KiB/s (4862kB/s)(4848KiB/1021msec) 00:30:50.230 slat (usec): min=6, max=103, avg=12.97, stdev= 6.34 00:30:50.230 clat (usec): min=245, max=40616, avg=463.59, stdev=1419.37 00:30:50.230 lat (usec): min=252, max=40627, avg=476.56, stdev=1419.38 00:30:50.230 clat percentiles (usec): 00:30:50.230 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 318], 00:30:50.230 | 30.00th=[ 343], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 383], 00:30:50.230 | 70.00th=[ 408], 80.00th=[ 437], 90.00th=[ 486], 95.00th=[ 537], 00:30:50.230 | 99.00th=[ 668], 99.50th=[ 2638], 99.90th=[21627], 99.95th=[40633], 00:30:50.230 | 99.99th=[40633] 00:30:50.230 write: IOPS=1504, BW=6018KiB/s (6162kB/s)(6144KiB/1021msec); 0 zone resets 00:30:50.230 slat (usec): min=8, max=205, avg=17.84, stdev=11.70 00:30:50.230 clat (usec): min=145, max=551, avg=262.07, stdev=81.15 00:30:50.230 lat (usec): min=154, max=575, avg=279.91, stdev=89.70 00:30:50.230 clat percentiles (usec): 00:30:50.230 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 188], 00:30:50.230 | 30.00th=[ 198], 40.00th=[ 210], 50.00th=[ 237], 60.00th=[ 273], 00:30:50.230 | 70.00th=[ 306], 80.00th=[ 347], 90.00th=[ 383], 95.00th=[ 424], 00:30:50.230 | 99.00th=[ 449], 99.50th=[ 457], 99.90th=[ 494], 99.95th=[ 553], 00:30:50.230 | 99.99th=[ 553] 00:30:50.230 bw ( KiB/s): min= 4096, max= 8192, per=38.33%, avg=6144.00, stdev=2896.31, samples=2 00:30:50.230 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:30:50.230 lat (usec) : 250=30.02%, 500=66.23%, 750=3.46%, 1000=0.04% 00:30:50.230 lat (msec) : 4=0.04%, 10=0.11%, 20=0.04%, 50=0.07% 00:30:50.230 cpu : usr=3.43%, sys=5.20%, ctx=2751, majf=0, minf=1 00:30:50.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.230 issued rwts: total=1212,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:50.230 job2: (groupid=0, jobs=1): err= 0: pid=2799578: Mon Jul 22 16:44:09 2024 00:30:50.230 read: IOPS=881, BW=3526KiB/s 
(3611kB/s)(3600KiB/1021msec) 00:30:50.230 slat (nsec): min=6109, max=63057, avg=11043.13, stdev=6272.84 00:30:50.230 clat (usec): min=226, max=41058, avg=810.06, stdev=4208.46 00:30:50.230 lat (usec): min=233, max=41074, avg=821.10, stdev=4209.73 00:30:50.230 clat percentiles (usec): 00:30:50.230 | 1.00th=[ 237], 5.00th=[ 249], 10.00th=[ 260], 20.00th=[ 277], 00:30:50.230 | 30.00th=[ 293], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 367], 00:30:50.230 | 70.00th=[ 396], 80.00th=[ 429], 90.00th=[ 486], 95.00th=[ 537], 00:30:50.230 | 99.00th=[34341], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:50.230 | 99.99th=[41157] 00:30:50.230 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:30:50.230 slat (nsec): min=8238, max=72004, avg=15665.38, stdev=8533.01 00:30:50.230 clat (usec): min=169, max=478, avg=251.29, stdev=55.73 00:30:50.230 lat (usec): min=178, max=501, avg=266.95, stdev=61.31 00:30:50.230 clat percentiles (usec): 00:30:50.230 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 204], 00:30:50.230 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 233], 60.00th=[ 255], 00:30:50.230 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 355], 00:30:50.230 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 474], 99.95th=[ 478], 00:30:50.230 | 99.99th=[ 478] 00:30:50.230 bw ( KiB/s): min= 8192, max= 8192, per=51.10%, avg=8192.00, stdev= 0.00, samples=1 00:30:50.230 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:50.230 lat (usec) : 250=33.42%, 500=63.10%, 750=2.70%, 1000=0.05% 00:30:50.230 lat (msec) : 2=0.16%, 10=0.05%, 50=0.52% 00:30:50.230 cpu : usr=1.67%, sys=3.53%, ctx=1927, majf=0, minf=2 00:30:50.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.230 issued rwts: total=900,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:50.230 job3: (groupid=0, jobs=1): err= 0: pid=2799591: Mon Jul 22 16:44:09 2024 00:30:50.230 read: IOPS=545, BW=2184KiB/s (2236kB/s)(2232KiB/1022msec) 00:30:50.230 slat (nsec): min=6506, max=73818, avg=13079.75, stdev=6513.83 00:30:50.230 clat (usec): min=252, max=41079, avg=1337.13, stdev=6114.16 00:30:50.230 lat (usec): min=259, max=41114, avg=1350.21, stdev=6115.90 00:30:50.230 clat percentiles (usec): 00:30:50.230 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 302], 20.00th=[ 318], 00:30:50.230 | 30.00th=[ 338], 40.00th=[ 367], 50.00th=[ 396], 60.00th=[ 424], 00:30:50.230 | 70.00th=[ 445], 80.00th=[ 465], 90.00th=[ 490], 95.00th=[ 562], 00:30:50.230 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:50.230 | 99.99th=[41157] 00:30:50.230 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:30:50.230 slat (usec): min=8, max=145, avg=14.19, stdev=15.19 00:30:50.230 clat (usec): min=160, max=1004, avg=240.57, stdev=56.95 00:30:50.230 lat (usec): min=170, max=1018, avg=254.77, stdev=64.14 00:30:50.230 clat percentiles (usec): 00:30:50.230 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 196], 20.00th=[ 204], 00:30:50.230 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 235], 00:30:50.230 | 70.00th=[ 243], 80.00th=[ 265], 90.00th=[ 314], 95.00th=[ 351], 00:30:50.230 | 99.00th=[ 429], 99.50th=[ 437], 99.90th=[ 523], 99.95th=[ 1004], 00:30:50.230 | 99.99th=[ 1004] 00:30:50.230 bw ( KiB/s): min= 
4096, max= 4096, per=25.55%, avg=4096.00, stdev= 0.00, samples=2 00:30:50.230 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:30:50.230 lat (usec) : 250=47.79%, 500=48.93%, 750=2.34%, 1000=0.06% 00:30:50.230 lat (msec) : 2=0.06%, 50=0.82% 00:30:50.230 cpu : usr=1.08%, sys=2.64%, ctx=1584, majf=0, minf=1 00:30:50.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.230 issued rwts: total=558,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:50.230 00:30:50.230 Run status group 0 (all jobs): 00:30:50.230 READ: bw=10.7MiB/s (11.2MB/s), 462KiB/s-4748KiB/s (473kB/s-4862kB/s), io=10.9MiB (11.4MB), run=1021-1022msec 00:30:50.230 WRITE: bw=15.7MiB/s (16.4MB/s), 2004KiB/s-6018KiB/s (2052kB/s-6162kB/s), io=16.0MiB (16.8MB), run=1021-1022msec 00:30:50.230 00:30:50.230 Disk stats (read/write): 00:30:50.230 nvme0n1: ios=163/512, merge=0/0, ticks=719/107, in_queue=826, util=86.17% 00:30:50.230 nvme0n2: ios=1047/1536, merge=0/0, ticks=1277/367, in_queue=1644, util=88.70% 00:30:50.230 nvme0n3: ios=648/1024, merge=0/0, ticks=1013/249, in_queue=1262, util=92.96% 00:30:50.230 nvme0n4: ios=535/608, merge=0/0, ticks=1582/127, in_queue=1709, util=94.17% 00:30:50.230 16:44:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:50.230 [global] 00:30:50.230 thread=1 00:30:50.230 invalidate=1 00:30:50.230 rw=randwrite 00:30:50.230 time_based=1 00:30:50.230 runtime=1 00:30:50.230 ioengine=libaio 00:30:50.230 direct=1 00:30:50.230 bs=4096 00:30:50.230 iodepth=1 00:30:50.230 norandommap=0 00:30:50.230 numjobs=1 00:30:50.230 00:30:50.230 verify_dump=1 00:30:50.230 verify_backlog=512 00:30:50.230 verify_state_save=0 00:30:50.230 do_verify=1 00:30:50.230 verify=crc32c-intel 00:30:50.230 [job0] 00:30:50.230 filename=/dev/nvme0n1 00:30:50.230 [job1] 00:30:50.230 filename=/dev/nvme0n2 00:30:50.230 [job2] 00:30:50.230 filename=/dev/nvme0n3 00:30:50.230 [job3] 00:30:50.230 filename=/dev/nvme0n4 00:30:50.230 Could not set queue depth (nvme0n1) 00:30:50.230 Could not set queue depth (nvme0n2) 00:30:50.230 Could not set queue depth (nvme0n3) 00:30:50.230 Could not set queue depth (nvme0n4) 00:30:50.490 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:50.490 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:50.490 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:50.490 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:50.490 fio-3.35 00:30:50.490 Starting 4 threads 00:30:51.863 00:30:51.863 job0: (groupid=0, jobs=1): err= 0: pid=2799883: Mon Jul 22 16:44:11 2024 00:30:51.863 read: IOPS=1006, BW=4027KiB/s (4124kB/s)(4132KiB/1026msec) 00:30:51.863 slat (nsec): min=5724, max=50935, avg=10728.80, stdev=5518.95 00:30:51.863 clat (usec): min=204, max=41158, avg=649.89, stdev=3786.65 00:30:51.863 lat (usec): min=210, max=41173, avg=660.62, stdev=3787.45 00:30:51.863 clat percentiles (usec): 00:30:51.863 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 243], 00:30:51.863 | 
30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 310], 00:30:51.863 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 379], 95.00th=[ 388], 00:30:51.863 | 99.00th=[ 433], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:51.863 | 99.99th=[41157] 00:30:51.863 write: IOPS=1497, BW=5988KiB/s (6132kB/s)(6144KiB/1026msec); 0 zone resets 00:30:51.863 slat (nsec): min=7452, max=41826, avg=10377.82, stdev=5005.86 00:30:51.863 clat (usec): min=136, max=1162, avg=207.64, stdev=60.11 00:30:51.863 lat (usec): min=144, max=1170, avg=218.02, stdev=61.85 00:30:51.863 clat percentiles (usec): 00:30:51.863 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 165], 00:30:51.863 | 30.00th=[ 180], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 210], 00:30:51.863 | 70.00th=[ 219], 80.00th=[ 233], 90.00th=[ 265], 95.00th=[ 297], 00:30:51.863 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 930], 99.95th=[ 1156], 00:30:51.863 | 99.99th=[ 1156] 00:30:51.863 bw ( KiB/s): min= 4096, max= 8175, per=49.17%, avg=6135.50, stdev=2884.29, samples=2 00:30:51.863 iops : min= 1024, max= 2043, avg=1533.50, stdev=720.54, samples=2 00:30:51.863 lat (usec) : 250=65.04%, 500=34.45%, 750=0.04%, 1000=0.08% 00:30:51.864 lat (msec) : 2=0.04%, 50=0.35% 00:30:51.864 cpu : usr=1.76%, sys=3.80%, ctx=2569, majf=0, minf=1 00:30:51.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.864 issued rwts: total=1033,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:51.864 job1: (groupid=0, jobs=1): err= 0: pid=2799884: Mon Jul 22 16:44:11 2024 00:30:51.864 read: IOPS=23, BW=95.9KiB/s (98.2kB/s)(96.0KiB/1001msec) 00:30:51.864 slat (nsec): min=7688, max=33669, avg=16649.29, stdev=7287.54 00:30:51.864 clat (usec): min=465, max=41729, avg=36386.76, stdev=11858.05 00:30:51.864 lat (usec): min=479, max=41737, avg=36403.41, stdev=11859.83 00:30:51.864 clat percentiles (usec): 00:30:51.864 | 1.00th=[ 465], 5.00th=[ 478], 10.00th=[24249], 20.00th=[40633], 00:30:51.864 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:51.864 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:51.864 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:30:51.864 | 99.99th=[41681] 00:30:51.864 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:30:51.864 slat (nsec): min=7863, max=33167, avg=9414.64, stdev=2851.59 00:30:51.864 clat (usec): min=165, max=392, avg=235.02, stdev=43.75 00:30:51.864 lat (usec): min=174, max=402, avg=244.43, stdev=44.26 00:30:51.864 clat percentiles (usec): 00:30:51.864 | 1.00th=[ 172], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:30:51.864 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 233], 00:30:51.864 | 70.00th=[ 243], 80.00th=[ 260], 90.00th=[ 306], 95.00th=[ 334], 00:30:51.864 | 99.00th=[ 363], 99.50th=[ 383], 99.90th=[ 392], 99.95th=[ 392], 00:30:51.864 | 99.99th=[ 392] 00:30:51.864 bw ( KiB/s): min= 4087, max= 4087, per=32.76%, avg=4087.00, stdev= 0.00, samples=1 00:30:51.864 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:30:51.864 lat (usec) : 250=74.81%, 500=21.08% 00:30:51.864 lat (msec) : 50=4.10% 00:30:51.864 cpu : usr=0.40%, sys=0.60%, ctx=536, majf=0, minf=2 00:30:51.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:30:51.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.864 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:51.864 job2: (groupid=0, jobs=1): err= 0: pid=2799885: Mon Jul 22 16:44:11 2024 00:30:51.864 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:30:51.864 slat (nsec): min=5291, max=69533, avg=20710.92, stdev=12060.96 00:30:51.864 clat (usec): min=242, max=41400, avg=1501.52, stdev=6631.80 00:30:51.864 lat (usec): min=250, max=41407, avg=1522.23, stdev=6631.20 00:30:51.864 clat percentiles (usec): 00:30:51.864 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 285], 00:30:51.864 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 343], 60.00th=[ 367], 00:30:51.864 | 70.00th=[ 457], 80.00th=[ 578], 90.00th=[ 603], 95.00th=[ 627], 00:30:51.864 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:51.864 | 99.99th=[41157] 00:30:51.864 write: IOPS=680, BW=2721KiB/s (2787kB/s)(2724KiB/1001msec); 0 zone resets 00:30:51.864 slat (nsec): min=6386, max=78134, avg=15933.47, stdev=14405.01 00:30:51.864 clat (usec): min=164, max=545, avg=300.28, stdev=93.36 00:30:51.864 lat (usec): min=170, max=586, avg=316.22, stdev=105.46 00:30:51.864 clat percentiles (usec): 00:30:51.864 | 1.00th=[ 188], 5.00th=[ 217], 10.00th=[ 227], 20.00th=[ 237], 00:30:51.864 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 258], 00:30:51.864 | 70.00th=[ 314], 80.00th=[ 412], 90.00th=[ 461], 95.00th=[ 469], 00:30:51.864 | 99.00th=[ 529], 99.50th=[ 529], 99.90th=[ 545], 99.95th=[ 545], 00:30:51.864 | 99.99th=[ 545] 00:30:51.864 bw ( KiB/s): min= 4087, max= 4087, per=32.76%, avg=4087.00, stdev= 0.00, samples=1 00:30:51.864 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:30:51.864 lat (usec) : 250=31.10%, 500=55.91%, 750=11.82% 00:30:51.864 lat (msec) : 50=1.17% 00:30:51.864 cpu : usr=1.40%, sys=1.90%, ctx=1195, majf=0, minf=1 00:30:51.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.864 issued rwts: total=512,681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:51.864 job3: (groupid=0, jobs=1): err= 0: pid=2799886: Mon Jul 22 16:44:11 2024 00:30:51.864 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:30:51.864 slat (nsec): min=8757, max=41685, avg=14874.68, stdev=6880.32 00:30:51.864 clat (usec): min=40833, max=41981, avg=41090.77, stdev=290.37 00:30:51.864 lat (usec): min=40847, max=41992, avg=41105.65, stdev=290.25 00:30:51.864 clat percentiles (usec): 00:30:51.864 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:51.864 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:51.864 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:30:51.864 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:51.864 | 99.99th=[42206] 00:30:51.864 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:30:51.864 slat (nsec): min=8347, max=55154, avg=10819.81, stdev=3140.43 00:30:51.864 clat (usec): min=168, max=567, avg=247.86, stdev=42.00 00:30:51.864 lat (usec): 
min=178, max=578, avg=258.68, stdev=42.60 00:30:51.864 clat percentiles (usec): 00:30:51.864 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 221], 00:30:51.864 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:30:51.864 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 318], 00:30:51.864 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 570], 99.95th=[ 570], 00:30:51.864 | 99.99th=[ 570] 00:30:51.864 bw ( KiB/s): min= 4087, max= 4087, per=32.76%, avg=4087.00, stdev= 0.00, samples=1 00:30:51.864 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:30:51.864 lat (usec) : 250=62.92%, 500=32.77%, 750=0.19% 00:30:51.864 lat (msec) : 50=4.12% 00:30:51.864 cpu : usr=0.29%, sys=0.77%, ctx=535, majf=0, minf=1 00:30:51.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.864 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:51.864 00:30:51.864 Run status group 0 (all jobs): 00:30:51.864 READ: bw=6125KiB/s (6272kB/s), 84.7KiB/s-4027KiB/s (86.7kB/s-4124kB/s), io=6364KiB (6517kB), run=1001-1039msec 00:30:51.864 WRITE: bw=12.2MiB/s (12.8MB/s), 1971KiB/s-5988KiB/s (2018kB/s-6132kB/s), io=12.7MiB (13.3MB), run=1001-1039msec 00:30:51.864 00:30:51.864 Disk stats (read/write): 00:30:51.864 nvme0n1: ios=1078/1536, merge=0/0, ticks=484/296, in_queue=780, util=87.17% 00:30:51.864 nvme0n2: ios=70/512, merge=0/0, ticks=775/112, in_queue=887, util=91.16% 00:30:51.864 nvme0n3: ios=454/512, merge=0/0, ticks=1061/131, in_queue=1192, util=98.23% 00:30:51.864 nvme0n4: ios=75/512, merge=0/0, ticks=850/119, in_queue=969, util=98.11% 00:30:51.864 16:44:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:51.864 [global] 00:30:51.864 thread=1 00:30:51.864 invalidate=1 00:30:51.864 rw=write 00:30:51.864 time_based=1 00:30:51.864 runtime=1 00:30:51.864 ioengine=libaio 00:30:51.864 direct=1 00:30:51.864 bs=4096 00:30:51.864 iodepth=128 00:30:51.864 norandommap=0 00:30:51.864 numjobs=1 00:30:51.864 00:30:51.864 verify_dump=1 00:30:51.864 verify_backlog=512 00:30:51.864 verify_state_save=0 00:30:51.864 do_verify=1 00:30:51.864 verify=crc32c-intel 00:30:51.864 [job0] 00:30:51.864 filename=/dev/nvme0n1 00:30:51.864 [job1] 00:30:51.864 filename=/dev/nvme0n2 00:30:51.864 [job2] 00:30:51.864 filename=/dev/nvme0n3 00:30:51.864 [job3] 00:30:51.864 filename=/dev/nvme0n4 00:30:51.864 Could not set queue depth (nvme0n1) 00:30:51.864 Could not set queue depth (nvme0n2) 00:30:51.864 Could not set queue depth (nvme0n3) 00:30:51.864 Could not set queue depth (nvme0n4) 00:30:51.864 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:51.864 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:51.864 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:51.864 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:51.864 fio-3.35 00:30:51.864 Starting 4 threads 00:30:53.240 00:30:53.240 job0: (groupid=0, jobs=1): err= 0: pid=2800111: Mon Jul 22 16:44:12 2024 
00:30:53.240 read: IOPS=5335, BW=20.8MiB/s (21.9MB/s)(20.9MiB/1003msec) 00:30:53.240 slat (usec): min=3, max=6136, avg=84.15, stdev=452.80 00:30:53.240 clat (usec): min=888, max=22678, avg=11157.87, stdev=1864.04 00:30:53.240 lat (usec): min=4777, max=22702, avg=11242.01, stdev=1886.96 00:30:53.240 clat percentiles (usec): 00:30:53.240 | 1.00th=[ 5669], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10290], 00:30:53.240 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:30:53.240 | 70.00th=[11469], 80.00th=[12256], 90.00th=[12911], 95.00th=[14091], 00:30:53.240 | 99.00th=[19530], 99.50th=[20579], 99.90th=[20579], 99.95th=[20579], 00:30:53.240 | 99.99th=[22676] 00:30:53.240 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:30:53.240 slat (usec): min=4, max=23428, avg=86.65, stdev=555.95 00:30:53.240 clat (usec): min=5276, max=37702, avg=11905.21, stdev=3900.45 00:30:53.240 lat (usec): min=5287, max=40603, avg=11991.86, stdev=3916.00 00:30:53.240 clat percentiles (usec): 00:30:53.240 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10421], 00:30:53.240 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:30:53.240 | 70.00th=[11600], 80.00th=[12256], 90.00th=[14877], 95.00th=[15795], 00:30:53.240 | 99.00th=[34341], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:30:53.240 | 99.99th=[37487] 00:30:53.240 bw ( KiB/s): min=22176, max=22880, per=33.58%, avg=22528.00, stdev=497.80, samples=2 00:30:53.240 iops : min= 5544, max= 5720, avg=5632.00, stdev=124.45, samples=2 00:30:53.240 lat (usec) : 1000=0.01% 00:30:53.240 lat (msec) : 10=13.62%, 20=84.94%, 50=1.43% 00:30:53.240 cpu : usr=6.19%, sys=12.28%, ctx=443, majf=0, minf=13 00:30:53.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:30:53.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:53.240 issued rwts: total=5352,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:53.240 job1: (groupid=0, jobs=1): err= 0: pid=2800112: Mon Jul 22 16:44:12 2024 00:30:53.240 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:30:53.240 slat (usec): min=3, max=34005, avg=172.56, stdev=1280.54 00:30:53.240 clat (usec): min=923, max=68031, avg=22784.45, stdev=10834.43 00:30:53.240 lat (usec): min=932, max=68046, avg=22957.01, stdev=10926.34 00:30:53.240 clat percentiles (usec): 00:30:53.240 | 1.00th=[ 7635], 5.00th=[10028], 10.00th=[10814], 20.00th=[11469], 00:30:53.240 | 30.00th=[11994], 40.00th=[15926], 50.00th=[24511], 60.00th=[27132], 00:30:53.240 | 70.00th=[29754], 80.00th=[32113], 90.00th=[35914], 95.00th=[40109], 00:30:53.240 | 99.00th=[46400], 99.50th=[47973], 99.90th=[48497], 99.95th=[63701], 00:30:53.240 | 99.99th=[67634] 00:30:53.240 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1006msec); 0 zone resets 00:30:53.240 slat (usec): min=4, max=14498, avg=142.25, stdev=802.10 00:30:53.240 clat (usec): min=1328, max=47930, avg=18644.94, stdev=8459.63 00:30:53.240 lat (usec): min=1340, max=47940, avg=18787.18, stdev=8506.75 00:30:53.240 clat percentiles (usec): 00:30:53.240 | 1.00th=[ 7504], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[11207], 00:30:53.240 | 30.00th=[11600], 40.00th=[14091], 50.00th=[18482], 60.00th=[19268], 00:30:53.240 | 70.00th=[22938], 80.00th=[24773], 90.00th=[26346], 95.00th=[32900], 00:30:53.240 | 99.00th=[47449], 99.50th=[47449], 
99.90th=[47973], 99.95th=[47973], 00:30:53.240 | 99.99th=[47973] 00:30:53.240 bw ( KiB/s): min=10424, max=14152, per=18.32%, avg=12288.00, stdev=2636.09, samples=2 00:30:53.240 iops : min= 2606, max= 3538, avg=3072.00, stdev=659.02, samples=2 00:30:53.240 lat (usec) : 1000=0.03% 00:30:53.240 lat (msec) : 2=0.34%, 10=6.51%, 20=46.84%, 50=46.23%, 100=0.05% 00:30:53.240 cpu : usr=3.58%, sys=6.57%, ctx=295, majf=0, minf=13 00:30:53.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:30:53.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:53.240 issued rwts: total=3072,3087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:53.240 job2: (groupid=0, jobs=1): err= 0: pid=2800115: Mon Jul 22 16:44:12 2024 00:30:53.240 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:30:53.240 slat (usec): min=2, max=9529, avg=124.89, stdev=688.84 00:30:53.240 clat (usec): min=4856, max=34909, avg=16939.23, stdev=5633.89 00:30:53.240 lat (usec): min=4859, max=34926, avg=17064.12, stdev=5692.51 00:30:53.240 clat percentiles (usec): 00:30:53.240 | 1.00th=[ 6521], 5.00th=[10028], 10.00th=[11207], 20.00th=[11994], 00:30:53.240 | 30.00th=[13042], 40.00th=[13960], 50.00th=[15401], 60.00th=[18220], 00:30:53.240 | 70.00th=[19792], 80.00th=[20841], 90.00th=[25560], 95.00th=[28705], 00:30:53.240 | 99.00th=[30802], 99.50th=[30802], 99.90th=[33162], 99.95th=[34341], 00:30:53.240 | 99.99th=[34866] 00:30:53.240 write: IOPS=4031, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1006msec); 0 zone resets 00:30:53.241 slat (usec): min=3, max=13966, avg=114.75, stdev=697.45 00:30:53.241 clat (usec): min=472, max=45855, avg=16496.30, stdev=7073.36 00:30:53.241 lat (usec): min=516, max=45864, avg=16611.05, stdev=7124.49 00:30:53.241 clat percentiles (usec): 00:30:53.241 | 1.00th=[ 3949], 5.00th=[ 5997], 10.00th=[ 8356], 20.00th=[11731], 00:30:53.241 | 30.00th=[12649], 40.00th=[13435], 50.00th=[15664], 60.00th=[17433], 00:30:53.241 | 70.00th=[19268], 80.00th=[20841], 90.00th=[24773], 95.00th=[30278], 00:30:53.241 | 99.00th=[40633], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:30:53.241 | 99.99th=[45876] 00:30:53.241 bw ( KiB/s): min=11560, max=19872, per=23.43%, avg=15716.00, stdev=5877.47, samples=2 00:30:53.241 iops : min= 2890, max= 4968, avg=3929.00, stdev=1469.37, samples=2 00:30:53.241 lat (usec) : 500=0.01%, 1000=0.03% 00:30:53.241 lat (msec) : 2=0.24%, 4=0.42%, 10=9.03%, 20=65.59%, 50=24.69% 00:30:53.241 cpu : usr=3.28%, sys=8.56%, ctx=344, majf=0, minf=13 00:30:53.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:53.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:53.241 issued rwts: total=3584,4056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:53.241 job3: (groupid=0, jobs=1): err= 0: pid=2800116: Mon Jul 22 16:44:12 2024 00:30:53.241 read: IOPS=3870, BW=15.1MiB/s (15.9MB/s)(15.1MiB/1002msec) 00:30:53.241 slat (usec): min=2, max=25123, avg=122.80, stdev=849.20 00:30:53.241 clat (usec): min=872, max=55054, avg=15182.77, stdev=6684.41 00:30:53.241 lat (usec): min=4695, max=56107, avg=15305.56, stdev=6742.15 00:30:53.241 clat percentiles (usec): 00:30:53.241 | 1.00th=[ 5276], 5.00th=[ 9503], 
10.00th=[10814], 20.00th=[11600], 00:30:53.241 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13173], 60.00th=[13960], 00:30:53.241 | 70.00th=[15533], 80.00th=[17171], 90.00th=[20055], 95.00th=[24249], 00:30:53.241 | 99.00th=[50594], 99.50th=[53740], 99.90th=[54789], 99.95th=[55313], 00:30:53.241 | 99.99th=[55313] 00:30:53.241 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:30:53.241 slat (usec): min=3, max=25631, avg=114.95, stdev=756.36 00:30:53.241 clat (usec): min=3446, max=50602, avg=16477.77, stdev=8661.31 00:30:53.241 lat (usec): min=3453, max=50609, avg=16592.72, stdev=8702.31 00:30:53.241 clat percentiles (usec): 00:30:53.241 | 1.00th=[ 7767], 5.00th=[ 9634], 10.00th=[11469], 20.00th=[12387], 00:30:53.241 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:30:53.241 | 70.00th=[14222], 80.00th=[19530], 90.00th=[26870], 95.00th=[38536], 00:30:53.241 | 99.00th=[48497], 99.50th=[48497], 99.90th=[50594], 99.95th=[50594], 00:30:53.241 | 99.99th=[50594] 00:30:53.241 bw ( KiB/s): min=14136, max=14136, per=21.07%, avg=14136.00, stdev= 0.00, samples=1 00:30:53.241 iops : min= 3534, max= 3534, avg=3534.00, stdev= 0.00, samples=1 00:30:53.241 lat (usec) : 1000=0.01% 00:30:53.241 lat (msec) : 4=0.08%, 10=6.13%, 20=78.51%, 50=14.64%, 100=0.64% 00:30:53.241 cpu : usr=4.40%, sys=10.59%, ctx=425, majf=0, minf=11 00:30:53.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:53.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:53.241 issued rwts: total=3878,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:53.241 00:30:53.241 Run status group 0 (all jobs): 00:30:53.241 READ: bw=61.7MiB/s (64.7MB/s), 11.9MiB/s-20.8MiB/s (12.5MB/s-21.9MB/s), io=62.1MiB (65.1MB), run=1002-1006msec 00:30:53.241 WRITE: bw=65.5MiB/s (68.7MB/s), 12.0MiB/s-21.9MiB/s (12.6MB/s-23.0MB/s), io=65.9MiB (69.1MB), run=1002-1006msec 00:30:53.241 00:30:53.241 Disk stats (read/write): 00:30:53.241 nvme0n1: ios=4652/4934, merge=0/0, ticks=24834/24760, in_queue=49594, util=96.69% 00:30:53.241 nvme0n2: ios=2073/2393, merge=0/0, ticks=30359/27806, in_queue=58165, util=96.44% 00:30:53.241 nvme0n3: ios=3168/3584, merge=0/0, ticks=26369/27836, in_queue=54205, util=96.54% 00:30:53.241 nvme0n4: ios=3123/3584, merge=0/0, ticks=29990/37112, in_queue=67102, util=96.41% 00:30:53.241 16:44:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:53.241 [global] 00:30:53.241 thread=1 00:30:53.241 invalidate=1 00:30:53.241 rw=randwrite 00:30:53.241 time_based=1 00:30:53.241 runtime=1 00:30:53.241 ioengine=libaio 00:30:53.241 direct=1 00:30:53.241 bs=4096 00:30:53.241 iodepth=128 00:30:53.241 norandommap=0 00:30:53.241 numjobs=1 00:30:53.241 00:30:53.241 verify_dump=1 00:30:53.241 verify_backlog=512 00:30:53.241 verify_state_save=0 00:30:53.241 do_verify=1 00:30:53.241 verify=crc32c-intel 00:30:53.241 [job0] 00:30:53.241 filename=/dev/nvme0n1 00:30:53.241 [job1] 00:30:53.241 filename=/dev/nvme0n2 00:30:53.241 [job2] 00:30:53.241 filename=/dev/nvme0n3 00:30:53.241 [job3] 00:30:53.241 filename=/dev/nvme0n4 00:30:53.241 Could not set queue depth (nvme0n1) 00:30:53.241 Could not set queue depth (nvme0n2) 00:30:53.241 Could not set queue depth (nvme0n3) 00:30:53.241 Could not set 
queue depth (nvme0n4) 00:30:53.498 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:53.498 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:53.498 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:53.498 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:53.498 fio-3.35 00:30:53.498 Starting 4 threads 00:30:54.433 00:30:54.433 job0: (groupid=0, jobs=1): err= 0: pid=2800342: Mon Jul 22 16:44:14 2024 00:30:54.433 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:30:54.433 slat (usec): min=3, max=25308, avg=209.21, stdev=1246.20 00:30:54.433 clat (msec): min=5, max=110, avg=28.27, stdev=23.90 00:30:54.433 lat (msec): min=5, max=110, avg=28.48, stdev=24.04 00:30:54.433 clat percentiles (msec): 00:30:54.433 | 1.00th=[ 10], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 13], 00:30:54.433 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 23], 60.00th=[ 24], 00:30:54.433 | 70.00th=[ 24], 80.00th=[ 33], 90.00th=[ 73], 95.00th=[ 84], 00:30:54.433 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 111], 00:30:54.433 | 99.99th=[ 111] 00:30:54.433 write: IOPS=2639, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1004msec); 0 zone resets 00:30:54.433 slat (usec): min=3, max=25989, avg=167.43, stdev=1259.73 00:30:54.433 clat (usec): min=437, max=82426, avg=20642.41, stdev=17128.41 00:30:54.433 lat (usec): min=1924, max=84256, avg=20809.85, stdev=17219.63 00:30:54.433 clat percentiles (usec): 00:30:54.433 | 1.00th=[ 3949], 5.00th=[ 6390], 10.00th=[ 9110], 20.00th=[10421], 00:30:54.433 | 30.00th=[11207], 40.00th=[11600], 50.00th=[15270], 60.00th=[16712], 00:30:54.433 | 70.00th=[18220], 80.00th=[27919], 90.00th=[47449], 95.00th=[59507], 00:30:54.433 | 99.00th=[82314], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:30:54.433 | 99.99th=[82314] 00:30:54.433 bw ( KiB/s): min= 4656, max=15792, per=15.62%, avg=10224.00, stdev=7874.34, samples=2 00:30:54.433 iops : min= 1164, max= 3948, avg=2556.00, stdev=1968.59, samples=2 00:30:54.433 lat (usec) : 500=0.02%, 1000=0.02% 00:30:54.433 lat (msec) : 4=0.52%, 10=12.46%, 20=44.61%, 50=30.46%, 100=11.23% 00:30:54.433 lat (msec) : 250=0.69% 00:30:54.433 cpu : usr=2.49%, sys=3.79%, ctx=222, majf=0, minf=1 00:30:54.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:30:54.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:54.433 issued rwts: total=2560,2650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:54.433 job1: (groupid=0, jobs=1): err= 0: pid=2800343: Mon Jul 22 16:44:14 2024 00:30:54.433 read: IOPS=5054, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1003msec) 00:30:54.433 slat (usec): min=3, max=6296, avg=90.43, stdev=469.84 00:30:54.433 clat (usec): min=543, max=25815, avg=11832.73, stdev=2731.89 00:30:54.433 lat (usec): min=3625, max=25831, avg=11923.16, stdev=2752.18 00:30:54.433 clat percentiles (usec): 00:30:54.433 | 1.00th=[ 7046], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10421], 00:30:54.433 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11469], 00:30:54.433 | 70.00th=[11863], 80.00th=[12256], 90.00th=[15008], 95.00th=[19268], 00:30:54.433 | 99.00th=[21365], 99.50th=[21627], 99.90th=[22676], 
99.95th=[23462], 00:30:54.433 | 99.99th=[25822] 00:30:54.433 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:30:54.433 slat (usec): min=4, max=27191, avg=97.90, stdev=692.04 00:30:54.433 clat (usec): min=6114, max=74230, avg=12869.27, stdev=7229.61 00:30:54.433 lat (usec): min=6122, max=74251, avg=12967.17, stdev=7286.01 00:30:54.433 clat percentiles (usec): 00:30:54.433 | 1.00th=[ 8291], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10552], 00:30:54.433 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:30:54.433 | 70.00th=[11469], 80.00th=[12256], 90.00th=[16712], 95.00th=[22152], 00:30:54.433 | 99.00th=[53740], 99.50th=[58983], 99.90th=[58983], 99.95th=[59507], 00:30:54.433 | 99.99th=[73925] 00:30:54.433 bw ( KiB/s): min=18243, max=22680, per=31.27%, avg=20461.50, stdev=3137.43, samples=2 00:30:54.433 iops : min= 4560, max= 5670, avg=5115.00, stdev=784.89, samples=2 00:30:54.433 lat (usec) : 750=0.01% 00:30:54.433 lat (msec) : 4=0.30%, 10=7.48%, 20=87.63%, 50=3.74%, 100=0.84% 00:30:54.433 cpu : usr=5.49%, sys=7.88%, ctx=456, majf=0, minf=1 00:30:54.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:30:54.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:54.433 issued rwts: total=5070,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:54.433 job2: (groupid=0, jobs=1): err= 0: pid=2800346: Mon Jul 22 16:44:14 2024 00:30:54.433 read: IOPS=3488, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1004msec) 00:30:54.433 slat (usec): min=2, max=16796, avg=148.89, stdev=1045.78 00:30:54.433 clat (usec): min=776, max=48920, avg=18322.41, stdev=7778.44 00:30:54.433 lat (usec): min=6724, max=48925, avg=18471.30, stdev=7833.57 00:30:54.433 clat percentiles (usec): 00:30:54.433 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[12387], 20.00th=[13566], 00:30:54.433 | 30.00th=[14222], 40.00th=[14484], 50.00th=[15008], 60.00th=[15926], 00:30:54.433 | 70.00th=[17957], 80.00th=[23987], 90.00th=[30278], 95.00th=[36439], 00:30:54.433 | 99.00th=[41681], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:30:54.433 | 99.99th=[49021] 00:30:54.433 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:30:54.433 slat (usec): min=4, max=13462, avg=126.33, stdev=716.09 00:30:54.433 clat (usec): min=2719, max=41431, avg=17634.61, stdev=6217.90 00:30:54.433 lat (usec): min=2741, max=41439, avg=17760.93, stdev=6268.93 00:30:54.433 clat percentiles (usec): 00:30:54.433 | 1.00th=[ 6063], 5.00th=[ 7832], 10.00th=[10683], 20.00th=[11600], 00:30:54.433 | 30.00th=[13566], 40.00th=[14746], 50.00th=[15664], 60.00th=[20579], 00:30:54.433 | 70.00th=[22676], 80.00th=[23200], 90.00th=[25035], 95.00th=[27657], 00:30:54.433 | 99.00th=[32375], 99.50th=[33162], 99.90th=[34341], 99.95th=[41681], 00:30:54.433 | 99.99th=[41681] 00:30:54.433 bw ( KiB/s): min=13168, max=15504, per=21.91%, avg=14336.00, stdev=1651.80, samples=2 00:30:54.433 iops : min= 3292, max= 3876, avg=3584.00, stdev=412.95, samples=2 00:30:54.433 lat (usec) : 1000=0.01% 00:30:54.433 lat (msec) : 4=0.14%, 10=3.75%, 20=63.24%, 50=32.85% 00:30:54.433 cpu : usr=2.39%, sys=6.28%, ctx=322, majf=0, minf=1 00:30:54.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:30:54.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.433 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:54.433 issued rwts: total=3502,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:54.433 job3: (groupid=0, jobs=1): err= 0: pid=2800347: Mon Jul 22 16:44:14 2024 00:30:54.433 read: IOPS=5077, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:30:54.433 slat (usec): min=3, max=11750, avg=105.61, stdev=761.19 00:30:54.433 clat (usec): min=2033, max=26424, avg=13268.32, stdev=3262.98 00:30:54.433 lat (usec): min=4360, max=26429, avg=13373.93, stdev=3312.45 00:30:54.433 clat percentiles (usec): 00:30:54.433 | 1.00th=[ 6259], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11731], 00:30:54.433 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:30:54.433 | 70.00th=[13173], 80.00th=[15533], 90.00th=[17957], 95.00th=[20317], 00:30:54.433 | 99.00th=[23462], 99.50th=[23987], 99.90th=[25822], 99.95th=[26346], 00:30:54.433 | 99.99th=[26346] 00:30:54.433 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:30:54.433 slat (usec): min=4, max=10168, avg=82.96, stdev=497.17 00:30:54.433 clat (usec): min=1203, max=26413, avg=11687.01, stdev=2809.75 00:30:54.433 lat (usec): min=1212, max=26421, avg=11769.96, stdev=2839.42 00:30:54.433 clat percentiles (usec): 00:30:54.433 | 1.00th=[ 3458], 5.00th=[ 6194], 10.00th=[ 7242], 20.00th=[ 9896], 00:30:54.433 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12518], 60.00th=[12780], 00:30:54.433 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13698], 95.00th=[15533], 00:30:54.433 | 99.00th=[17695], 99.50th=[18744], 99.90th=[23987], 99.95th=[24249], 00:30:54.433 | 99.99th=[26346] 00:30:54.433 bw ( KiB/s): min=20439, max=20480, per=31.26%, avg=20459.50, stdev=28.99, samples=2 00:30:54.433 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:30:54.433 lat (msec) : 2=0.08%, 4=0.49%, 10=13.67%, 20=82.61%, 50=3.16% 00:30:54.433 cpu : usr=5.07%, sys=7.65%, ctx=524, majf=0, minf=1 00:30:54.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:30:54.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:54.433 issued rwts: total=5113,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:54.433 00:30:54.433 Run status group 0 (all jobs): 00:30:54.433 READ: bw=63.0MiB/s (66.1MB/s), 9.96MiB/s-19.8MiB/s (10.4MB/s-20.8MB/s), io=63.5MiB (66.5MB), run=1003-1007msec 00:30:54.433 WRITE: bw=63.9MiB/s (67.0MB/s), 10.3MiB/s-19.9MiB/s (10.8MB/s-20.9MB/s), io=64.4MiB (67.5MB), run=1003-1007msec 00:30:54.433 00:30:54.433 Disk stats (read/write): 00:30:54.433 nvme0n1: ios=2430/2560, merge=0/0, ticks=18922/16578, in_queue=35500, util=97.70% 00:30:54.433 nvme0n2: ios=4123/4399, merge=0/0, ticks=16727/18165, in_queue=34892, util=96.85% 00:30:54.433 nvme0n3: ios=2611/3071, merge=0/0, ticks=38068/51452, in_queue=89520, util=96.76% 00:30:54.433 nvme0n4: ios=4146/4503, merge=0/0, ticks=53156/51155, in_queue=104311, util=96.63% 00:30:54.695 16:44:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:54.695 16:44:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2800489 00:30:54.695 16:44:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:54.695 16:44:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 
00:30:54.695 [global] 00:30:54.695 thread=1 00:30:54.695 invalidate=1 00:30:54.695 rw=read 00:30:54.695 time_based=1 00:30:54.695 runtime=10 00:30:54.695 ioengine=libaio 00:30:54.695 direct=1 00:30:54.695 bs=4096 00:30:54.695 iodepth=1 00:30:54.695 norandommap=1 00:30:54.695 numjobs=1 00:30:54.695 00:30:54.695 [job0] 00:30:54.695 filename=/dev/nvme0n1 00:30:54.695 [job1] 00:30:54.695 filename=/dev/nvme0n2 00:30:54.695 [job2] 00:30:54.695 filename=/dev/nvme0n3 00:30:54.695 [job3] 00:30:54.695 filename=/dev/nvme0n4 00:30:54.695 Could not set queue depth (nvme0n1) 00:30:54.695 Could not set queue depth (nvme0n2) 00:30:54.695 Could not set queue depth (nvme0n3) 00:30:54.695 Could not set queue depth (nvme0n4) 00:30:54.952 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:54.952 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:54.952 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:54.952 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:54.952 fio-3.35 00:30:54.952 Starting 4 threads 00:30:58.230 16:44:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:58.230 16:44:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:58.230 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=35246080, buflen=4096 00:30:58.230 fio: pid=2800695, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:30:58.230 16:44:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:58.230 16:44:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:58.230 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=2461696, buflen=4096 00:30:58.230 fio: pid=2800694, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:30:58.488 16:44:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:58.488 16:44:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:58.488 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=17432576, buflen=4096 00:30:58.488 fio: pid=2800692, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:30:58.746 16:44:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:58.746 16:44:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:58.746 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=46387200, buflen=4096 00:30:58.746 fio: pid=2800693, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:30:59.004 00:30:59.004 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2800692: Mon Jul 22 16:44:18 2024 00:30:59.004 read: IOPS=1192, BW=4770KiB/s (4884kB/s)(16.6MiB/3569msec) 00:30:59.004 slat (usec): min=5, 
max=27668, avg=24.72, stdev=505.74 00:30:59.004 clat (usec): min=212, max=42093, avg=802.62, stdev=4260.82 00:30:59.004 lat (usec): min=218, max=42107, avg=827.35, stdev=4290.11 00:30:59.004 clat percentiles (usec): 00:30:59.004 | 1.00th=[ 237], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 297], 00:30:59.004 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 334], 00:30:59.004 | 70.00th=[ 347], 80.00th=[ 449], 90.00th=[ 482], 95.00th=[ 519], 00:30:59.004 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:30:59.004 | 99.99th=[42206] 00:30:59.004 bw ( KiB/s): min= 96, max=11760, per=16.69%, avg=4302.67, stdev=4663.43, samples=6 00:30:59.004 iops : min= 24, max= 2940, avg=1075.67, stdev=1165.86, samples=6 00:30:59.004 lat (usec) : 250=2.84%, 500=90.32%, 750=5.50%, 1000=0.14% 00:30:59.004 lat (msec) : 2=0.07%, 50=1.10% 00:30:59.004 cpu : usr=0.84%, sys=1.51%, ctx=4262, majf=0, minf=1 00:30:59.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.004 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.004 issued rwts: total=4257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.004 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2800693: Mon Jul 22 16:44:18 2024 00:30:59.004 read: IOPS=2944, BW=11.5MiB/s (12.1MB/s)(44.2MiB/3846msec) 00:30:59.004 slat (usec): min=5, max=13707, avg=13.41, stdev=209.68 00:30:59.004 clat (usec): min=204, max=50613, avg=320.96, stdev=872.84 00:30:59.004 lat (usec): min=210, max=54887, avg=334.36, stdev=951.76 00:30:59.004 clat percentiles (usec): 00:30:59.004 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:30:59.004 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:30:59.004 | 70.00th=[ 293], 80.00th=[ 330], 90.00th=[ 404], 95.00th=[ 461], 00:30:59.004 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 1029], 99.95th=[22676], 00:30:59.004 | 99.99th=[41157] 00:30:59.004 bw ( KiB/s): min= 7374, max=14664, per=46.66%, avg=12029.43, stdev=2978.98, samples=7 00:30:59.004 iops : min= 1843, max= 3666, avg=3007.29, stdev=744.87, samples=7 00:30:59.004 lat (usec) : 250=14.53%, 500=82.41%, 750=2.78%, 1000=0.15% 00:30:59.004 lat (msec) : 2=0.06%, 50=0.04%, 100=0.01% 00:30:59.004 cpu : usr=1.53%, sys=4.40%, ctx=11333, majf=0, minf=1 00:30:59.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.004 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.004 issued rwts: total=11326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.004 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2800694: Mon Jul 22 16:44:18 2024 00:30:59.004 read: IOPS=182, BW=729KiB/s (747kB/s)(2404KiB/3296msec) 00:30:59.004 slat (usec): min=5, max=9865, avg=25.88, stdev=401.75 00:30:59.004 clat (usec): min=238, max=42146, avg=5419.20, stdev=13547.63 00:30:59.004 lat (usec): min=245, max=50958, avg=5445.10, stdev=13599.35 00:30:59.004 clat percentiles (usec): 00:30:59.004 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 269], 00:30:59.004 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:30:59.004 | 
70.00th=[ 297], 80.00th=[ 318], 90.00th=[41157], 95.00th=[41157], 00:30:59.004 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:59.004 | 99.99th=[42206] 00:30:59.004 bw ( KiB/s): min= 96, max= 4096, per=3.07%, avg=792.00, stdev=1620.15, samples=6 00:30:59.004 iops : min= 24, max= 1024, avg=198.00, stdev=405.04, samples=6 00:30:59.004 lat (usec) : 250=2.82%, 500=83.89%, 750=0.33%, 1000=0.17% 00:30:59.004 lat (msec) : 20=0.17%, 50=12.46% 00:30:59.004 cpu : usr=0.09%, sys=0.15%, ctx=604, majf=0, minf=1 00:30:59.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.004 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.004 issued rwts: total=602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.004 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2800695: Mon Jul 22 16:44:18 2024 00:30:59.004 read: IOPS=2885, BW=11.3MiB/s (11.8MB/s)(33.6MiB/2982msec) 00:30:59.004 slat (nsec): min=6155, max=81015, avg=11583.20, stdev=8528.02 00:30:59.004 clat (usec): min=227, max=40641, avg=328.92, stdev=441.25 00:30:59.004 lat (usec): min=233, max=40648, avg=340.51, stdev=442.31 00:30:59.004 clat percentiles (usec): 00:30:59.004 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:30:59.004 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 306], 60.00th=[ 322], 00:30:59.004 | 70.00th=[ 338], 80.00th=[ 396], 90.00th=[ 441], 95.00th=[ 474], 00:30:59.004 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 693], 99.95th=[ 857], 00:30:59.004 | 99.99th=[40633] 00:30:59.004 bw ( KiB/s): min= 8840, max=14712, per=43.55%, avg=11227.20, stdev=2317.56, samples=5 00:30:59.004 iops : min= 2210, max= 3678, avg=2806.80, stdev=579.39, samples=5 00:30:59.004 lat (usec) : 250=8.08%, 500=89.36%, 750=2.46%, 1000=0.06% 00:30:59.004 lat (msec) : 2=0.02%, 50=0.01% 00:30:59.004 cpu : usr=1.81%, sys=4.93%, ctx=8609, majf=0, minf=1 00:30:59.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.004 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.004 issued rwts: total=8606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.004 00:30:59.004 Run status group 0 (all jobs): 00:30:59.004 READ: bw=25.2MiB/s (26.4MB/s), 729KiB/s-11.5MiB/s (747kB/s-12.1MB/s), io=96.8MiB (102MB), run=2982-3846msec 00:30:59.004 00:30:59.004 Disk stats (read/write): 00:30:59.004 nvme0n1: ios=3779/0, merge=0/0, ticks=3218/0, in_queue=3218, util=94.45% 00:30:59.004 nvme0n2: ios=10712/0, merge=0/0, ticks=3353/0, in_queue=3353, util=95.74% 00:30:59.004 nvme0n3: ios=597/0, merge=0/0, ticks=3089/0, in_queue=3089, util=96.66% 00:30:59.004 nvme0n4: ios=8282/0, merge=0/0, ticks=3077/0, in_queue=3077, util=99.09% 00:30:59.004 16:44:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:59.004 16:44:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:59.263 16:44:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:59.263 
16:44:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:59.520 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:59.520 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:59.778 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:59.778 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:00.036 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:00.036 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2800489 00:31:00.036 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:00.036 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:00.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:00.294 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:00.294 16:44:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:31:00.294 16:44:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:31:00.294 16:44:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:00.294 16:44:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:31:00.294 16:44:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:00.294 16:44:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:31:00.294 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:00.294 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:00.294 nvmf hotplug test: fio failed as expected 00:31:00.294 16:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:00.552 rmmod nvme_tcp 00:31:00.552 rmmod nvme_fabrics 00:31:00.552 rmmod nvme_keyring 
00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2798578 ']' 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2798578 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 2798578 ']' 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 2798578 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2798578 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2798578' 00:31:00.552 killing process with pid 2798578 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 2798578 00:31:00.552 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 2798578 00:31:00.811 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:00.811 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:00.811 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:00.811 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:00.811 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:00.811 16:44:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.811 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:00.811 16:44:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.713 16:44:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:02.713 00:31:02.713 real 0m23.837s 00:31:02.713 user 1m22.580s 00:31:02.713 sys 0m7.222s 00:31:02.713 16:44:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:02.713 16:44:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:02.713 ************************************ 00:31:02.713 END TEST nvmf_fio_target 00:31:02.713 ************************************ 00:31:02.971 16:44:22 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:31:02.971 16:44:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:02.971 16:44:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:02.971 16:44:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:02.971 ************************************ 00:31:02.971 START TEST nvmf_bdevio 00:31:02.971 ************************************ 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:31:02.971 * Looking for test storage... 00:31:02.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.971 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:31:02.972 16:44:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:31:05.511 Found 0000:82:00.0 (0x8086 - 0x159b) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:31:05.511 Found 0000:82:00.1 (0x8086 - 0x159b) 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:05.511 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:31:05.512 Found net devices under 0000:82:00.0: cvl_0_0 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:31:05.512 
Found net devices under 0000:82:00.1: cvl_0_1 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:05.512 16:44:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:05.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:05.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:31:05.512 00:31:05.512 --- 10.0.0.2 ping statistics --- 00:31:05.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.512 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:05.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:05.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:31:05.512 00:31:05.512 --- 10.0.0.1 ping statistics --- 00:31:05.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.512 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2803604 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2803604 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 2803604 ']' 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:05.512 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:05.512 [2024-07-22 16:44:25.087017] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:05.512 [2024-07-22 16:44:25.087091] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.512 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.770 [2024-07-22 16:44:25.162878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:05.770 [2024-07-22 16:44:25.248745] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:05.770 [2024-07-22 16:44:25.248793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:05.770 [2024-07-22 16:44:25.248822] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.770 [2024-07-22 16:44:25.248833] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.770 [2024-07-22 16:44:25.248842] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:05.770 [2024-07-22 16:44:25.248914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:05.770 [2024-07-22 16:44:25.249007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:05.770 [2024-07-22 16:44:25.249078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:05.770 [2024-07-22 16:44:25.249075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:05.770 [2024-07-22 16:44:25.409856] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.770 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:06.028 Malloc0 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:31:06.028 [2024-07-22 16:44:25.462176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:06.028 { 00:31:06.028 "params": { 00:31:06.028 "name": "Nvme$subsystem", 00:31:06.028 "trtype": "$TEST_TRANSPORT", 00:31:06.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.028 "adrfam": "ipv4", 00:31:06.028 "trsvcid": "$NVMF_PORT", 00:31:06.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.028 "hdgst": ${hdgst:-false}, 00:31:06.028 "ddgst": ${ddgst:-false} 00:31:06.028 }, 00:31:06.028 "method": "bdev_nvme_attach_controller" 00:31:06.028 } 00:31:06.028 EOF 00:31:06.028 )") 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:31:06.028 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:31:06.029 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:31:06.029 16:44:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:06.029 "params": { 00:31:06.029 "name": "Nvme1", 00:31:06.029 "trtype": "tcp", 00:31:06.029 "traddr": "10.0.0.2", 00:31:06.029 "adrfam": "ipv4", 00:31:06.029 "trsvcid": "4420", 00:31:06.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:06.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:06.029 "hdgst": false, 00:31:06.029 "ddgst": false 00:31:06.029 }, 00:31:06.029 "method": "bdev_nvme_attach_controller" 00:31:06.029 }' 00:31:06.029 [2024-07-22 16:44:25.511258] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:31:06.029 [2024-07-22 16:44:25.511350] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2803674 ] 00:31:06.029 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.029 [2024-07-22 16:44:25.584977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:06.029 [2024-07-22 16:44:25.676077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.029 [2024-07-22 16:44:25.676130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.029 [2024-07-22 16:44:25.676133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.287 I/O targets: 00:31:06.287 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:06.287 00:31:06.287 00:31:06.287 CUnit - A unit testing framework for C - Version 2.1-3 00:31:06.287 http://cunit.sourceforge.net/ 00:31:06.287 00:31:06.287 00:31:06.287 Suite: bdevio tests on: Nvme1n1 00:31:06.544 Test: blockdev write read block ...passed 00:31:06.544 Test: blockdev write zeroes read block ...passed 00:31:06.544 Test: blockdev write zeroes read no split ...passed 00:31:06.544 Test: blockdev write zeroes read split ...passed 00:31:06.544 Test: blockdev write zeroes read split partial ...passed 00:31:06.544 Test: blockdev reset ...[2024-07-22 16:44:26.107985] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:06.544 [2024-07-22 16:44:26.108097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2a00 (9): Bad file descriptor 00:31:06.802 [2024-07-22 16:44:26.204624] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:06.802 passed 00:31:06.802 Test: blockdev write read 8 blocks ...passed 00:31:06.802 Test: blockdev write read size > 128k ...passed 00:31:06.802 Test: blockdev write read invalid size ...passed 00:31:06.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:06.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:06.802 Test: blockdev write read max offset ...passed 00:31:06.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:06.802 Test: blockdev writev readv 8 blocks ...passed 00:31:06.802 Test: blockdev writev readv 30 x 1block ...passed 00:31:06.802 Test: blockdev writev readv block ...passed 00:31:06.802 Test: blockdev writev readv size > 128k ...passed 00:31:06.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:06.802 Test: blockdev comparev and writev ...[2024-07-22 16:44:26.426255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.802 [2024-07-22 16:44:26.426291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:06.802 [2024-07-22 16:44:26.426316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.802 [2024-07-22 16:44:26.426334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:06.802 [2024-07-22 16:44:26.426728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.802 [2024-07-22 16:44:26.426753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:06.802 [2024-07-22 16:44:26.426776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.802 [2024-07-22 16:44:26.426793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:06.802 [2024-07-22 16:44:26.427201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.802 [2024-07-22 16:44:26.427234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:06.802 [2024-07-22 16:44:26.427257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.802 [2024-07-22 16:44:26.427273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:06.802 [2024-07-22 16:44:26.427737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.802 [2024-07-22 16:44:26.427761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:06.802 [2024-07-22 16:44:26.427783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.802 [2024-07-22 16:44:26.427799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:07.060 passed 00:31:07.060 Test: blockdev nvme passthru rw ...passed 00:31:07.060 Test: blockdev nvme passthru vendor specific ...[2024-07-22 16:44:26.510284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:07.060 [2024-07-22 16:44:26.510311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:07.060 [2024-07-22 16:44:26.510480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:07.060 [2024-07-22 16:44:26.510504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:07.060 [2024-07-22 16:44:26.510662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:07.060 [2024-07-22 16:44:26.510685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:07.060 [2024-07-22 16:44:26.510849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:07.060 [2024-07-22 16:44:26.510872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:07.060 passed 00:31:07.060 Test: blockdev nvme admin passthru ...passed 00:31:07.060 Test: blockdev copy ...passed 00:31:07.060 00:31:07.060 Run Summary: Type Total Ran Passed Failed Inactive 00:31:07.060 suites 1 1 n/a 0 0 00:31:07.060 tests 23 23 23 0 0 00:31:07.060 asserts 152 152 152 0 n/a 00:31:07.060 00:31:07.060 Elapsed time = 1.349 seconds 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:07.318 rmmod nvme_tcp 00:31:07.318 rmmod nvme_fabrics 00:31:07.318 rmmod nvme_keyring 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2803604 ']' 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2803604 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
2803604 ']' 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 2803604 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2803604 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2803604' 00:31:07.318 killing process with pid 2803604 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 2803604 00:31:07.318 16:44:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 2803604 00:31:07.577 16:44:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:07.577 16:44:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:07.577 16:44:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:07.577 16:44:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:07.577 16:44:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:07.577 16:44:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.577 16:44:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:07.577 16:44:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.478 16:44:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:09.478 00:31:09.478 real 0m6.724s 00:31:09.478 user 0m10.339s 00:31:09.478 sys 0m2.423s 00:31:09.478 16:44:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:09.478 16:44:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:09.478 ************************************ 00:31:09.478 END TEST nvmf_bdevio 00:31:09.478 ************************************ 00:31:09.737 16:44:29 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:31:09.737 16:44:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:09.737 16:44:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:09.737 16:44:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:09.737 ************************************ 00:31:09.737 START TEST nvmf_auth_target 00:31:09.737 ************************************ 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:31:09.737 * Looking for test storage... 
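Each suite in this log is driven by the run_test helper from autotest_common.sh, which prints the START/END banners and the real/user/sys timing seen above. A minimal sketch of that wrapper, assuming simplified banner logic (the argument check mirrors the "'[' 3 -le 1 ']'" test in the trace; the real helper also records per-test timing data):

run_test() {
    [ $# -le 1 ] && return 1    # the "'[' 3 -le 1 ']'" check seen in the trace
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                   # e.g. auth.sh --transport=tcp
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}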
00:31:09.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:09.737 16:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:09.738 16:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.269 16:44:31 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:31:12.269 Found 0000:82:00.0 (0x8086 - 0x159b) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:31:12.269 Found 0000:82:00.1 (0x8086 - 0x159b) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: 
cvl_0_0' 00:31:12.269 Found net devices under 0000:82:00.0: cvl_0_0 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:31:12.269 Found net devices under 0000:82:00.1: cvl_0_1 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:12.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:31:12.269 00:31:12.269 --- 10.0.0.2 ping statistics --- 00:31:12.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.269 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:12.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:31:12.269 00:31:12.269 --- 10.0.0.1 ping statistics --- 00:31:12.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.269 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.269 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2806108 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2806108 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2806108 ']' 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
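The ping exchanges above close out nvmf_tcp_init: on phy runs one e810 port (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator, so NVMe/TCP traffic crosses a real link. Condensed from the trace, with the device and namespace names used in this run:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # drop stale addresses
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator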
00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:12.270 16:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2806134 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=01f1886cf1072e3b742f92b42c74537e18a4c090259b8196 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.m7T 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 01f1886cf1072e3b742f92b42c74537e18a4c090259b8196 0 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 01f1886cf1072e3b742f92b42c74537e18a4c090259b8196 0 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=01f1886cf1072e3b742f92b42c74537e18a4c090259b8196 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:31:12.528 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.m7T 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.m7T 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.m7T 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6b3986c78507e50be5cbe20058f5890686280c693f4380f9dc39c15fb392e3bc 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.VHv 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6b3986c78507e50be5cbe20058f5890686280c693f4380f9dc39c15fb392e3bc 3 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6b3986c78507e50be5cbe20058f5890686280c693f4380f9dc39c15fb392e3bc 3 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6b3986c78507e50be5cbe20058f5890686280c693f4380f9dc39c15fb392e3bc 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.VHv 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.VHv 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.VHv 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f7bb25dbb176153bc293ef42eeff6867 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.G7k 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f7bb25dbb176153bc293ef42eeff6867 1 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f7bb25dbb176153bc293ef42eeff6867 1 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=f7bb25dbb176153bc293ef42eeff6867 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.G7k 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.G7k 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.G7k 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6474019cd0d18c011add6efc350c4226ea091a74cbd45077 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kUu 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6474019cd0d18c011add6efc350c4226ea091a74cbd45077 2 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6474019cd0d18c011add6efc350c4226ea091a74cbd45077 2 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6474019cd0d18c011add6efc350c4226ea091a74cbd45077 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kUu 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kUu 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.kUu 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.787 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=347db4d43c5275d5228f668c6eca1ed37d276d6e34e5e33c 00:31:12.788 
16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kKz 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 347db4d43c5275d5228f668c6eca1ed37d276d6e34e5e33c 2 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 347db4d43c5275d5228f668c6eca1ed37d276d6e34e5e33c 2 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=347db4d43c5275d5228f668c6eca1ed37d276d6e34e5e33c 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kKz 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kKz 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.kKz 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:12.788 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=683de27ccc2d78f958aff10a6a963933 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1A6 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 683de27ccc2d78f958aff10a6a963933 1 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 683de27ccc2d78f958aff10a6a963933 1 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=683de27ccc2d78f958aff10a6a963933 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1A6 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1A6 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.1A6 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=945e6d09b6649b88f6d2d8fd323b0c9082df50276685f0dd1fef698881d62cfd 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.QaH 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 945e6d09b6649b88f6d2d8fd323b0c9082df50276685f0dd1fef698881d62cfd 3 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 945e6d09b6649b88f6d2d8fd323b0c9082df50276685f0dd1fef698881d62cfd 3 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=945e6d09b6649b88f6d2d8fd323b0c9082df50276685f0dd1fef698881d62cfd 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.QaH 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.QaH 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.QaH 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2806108 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2806108 ']' 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
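At this point all four key slots are populated: keys[0..3] hold null/48-, sha256/32-, sha384/48- and sha512/64-character secrets (m7T, G7k, kKz, QaH), ckeys[0..2] hold the matching controller keys (VHv, kUu, 1A6), and ckeys[3] is left empty so the last slot is exercised without bidirectional authentication. Each gen_dhchap_key call reduces to roughly the sketch below; the inline python is an assumption about what the trace's "python -" step does, inferred from the DH-HMAC-CHAP secret representation (key material followed by its little-endian CRC-32, base64-encoded inside a DHHC-1:<digest>:...: wrapper, which matches the DHHC-1:00:MDFm...: secret passed to nvme connect later in this log):

# one slot: digest "null" (index 00), 48 hex characters of key material
key=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex chars
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
# base64(key material + little-endian CRC-32), per the DHHC-1 secret format (assumed)
print("DHHC-1:00:%s:" % base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode())
EOF
chmod 0600 "$file"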
00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:13.046 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.304 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:13.304 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:31:13.304 16:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2806134 /var/tmp/host.sock 00:31:13.304 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2806134 ']' 00:31:13.304 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:31:13.304 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:13.304 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:31:13.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:31:13.304 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:13.304 16:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.m7T 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.m7T 00:31:13.560 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.m7T 00:31:13.817 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.VHv ]] 00:31:13.817 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VHv 00:31:13.817 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.817 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.817 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.817 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VHv 00:31:13.817 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VHv 00:31:14.073 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:31:14.073 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.G7k 00:31:14.073 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.073 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:14.073 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.073 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.G7k 00:31:14.073 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.G7k 00:31:14.331 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.kUu ]] 00:31:14.331 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kUu 00:31:14.331 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.331 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:14.331 16:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.331 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kUu 00:31:14.331 16:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kUu 00:31:14.588 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:31:14.588 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kKz 00:31:14.588 16:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.588 16:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:14.589 16:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.589 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.kKz 00:31:14.589 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.kKz 00:31:14.847 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.1A6 ]] 00:31:14.847 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1A6 00:31:14.847 16:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.847 16:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:14.847 16:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.847 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1A6 00:31:14.847 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.1A6 00:31:15.104 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:31:15.104 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.QaH 00:31:15.104 16:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.104 16:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:15.104 16:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.104 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.QaH 00:31:15.105 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.QaH 00:31:15.362 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:31:15.362 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:31:15.362 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:31:15.362 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:15.362 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:15.362 16:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:15.620 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:15.878 00:31:15.878 16:44:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:15.878 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:15.878 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:16.136 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.136 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:16.136 16:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.136 16:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:16.136 16:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.136 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:16.136 { 00:31:16.136 "cntlid": 1, 00:31:16.136 "qid": 0, 00:31:16.136 "state": "enabled", 00:31:16.136 "listen_address": { 00:31:16.136 "trtype": "TCP", 00:31:16.136 "adrfam": "IPv4", 00:31:16.136 "traddr": "10.0.0.2", 00:31:16.136 "trsvcid": "4420" 00:31:16.136 }, 00:31:16.136 "peer_address": { 00:31:16.136 "trtype": "TCP", 00:31:16.136 "adrfam": "IPv4", 00:31:16.136 "traddr": "10.0.0.1", 00:31:16.136 "trsvcid": "51002" 00:31:16.136 }, 00:31:16.136 "auth": { 00:31:16.136 "state": "completed", 00:31:16.136 "digest": "sha256", 00:31:16.136 "dhgroup": "null" 00:31:16.136 } 00:31:16.136 } 00:31:16.136 ]' 00:31:16.136 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:16.136 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:16.136 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:16.393 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:31:16.393 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:16.393 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:16.393 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:16.393 16:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:16.651 16:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:31:17.603 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:17.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:17.603 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:17.603 16:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.603 16:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:31:17.603 16:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.603 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:17.603 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:17.603 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:17.860 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:31:17.860 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:17.860 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:17.860 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:31:17.860 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:31:17.860 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:17.861 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:17.861 16:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.861 16:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:17.861 16:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.861 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:17.861 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:18.118 00:31:18.118 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:18.118 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:18.118 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:18.376 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.376 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:18.376 16:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.376 16:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:18.376 16:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.376 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:18.376 { 00:31:18.376 "cntlid": 3, 00:31:18.376 "qid": 0, 00:31:18.376 "state": "enabled", 00:31:18.376 "listen_address": { 00:31:18.376 
"trtype": "TCP", 00:31:18.376 "adrfam": "IPv4", 00:31:18.376 "traddr": "10.0.0.2", 00:31:18.376 "trsvcid": "4420" 00:31:18.376 }, 00:31:18.376 "peer_address": { 00:31:18.376 "trtype": "TCP", 00:31:18.376 "adrfam": "IPv4", 00:31:18.376 "traddr": "10.0.0.1", 00:31:18.376 "trsvcid": "40868" 00:31:18.376 }, 00:31:18.376 "auth": { 00:31:18.376 "state": "completed", 00:31:18.376 "digest": "sha256", 00:31:18.376 "dhgroup": "null" 00:31:18.376 } 00:31:18.376 } 00:31:18.376 ]' 00:31:18.376 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:18.376 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:18.376 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:18.376 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:31:18.376 16:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:18.376 16:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:18.376 16:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:18.376 16:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:18.634 16:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:31:19.567 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:19.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:19.825 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:19.825 16:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.825 16:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.825 16:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.825 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:19.825 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:19.825 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:20.082 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:31:20.082 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:20.082 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:20.082 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:31:20.082 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:31:20.082 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:20.082 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:20.082 16:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.082 16:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.082 16:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.082 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:20.083 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:20.340 00:31:20.340 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:20.340 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:20.340 16:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:20.598 { 00:31:20.598 "cntlid": 5, 00:31:20.598 "qid": 0, 00:31:20.598 "state": "enabled", 00:31:20.598 "listen_address": { 00:31:20.598 "trtype": "TCP", 00:31:20.598 "adrfam": "IPv4", 00:31:20.598 "traddr": "10.0.0.2", 00:31:20.598 "trsvcid": "4420" 00:31:20.598 }, 00:31:20.598 "peer_address": { 00:31:20.598 "trtype": "TCP", 00:31:20.598 "adrfam": "IPv4", 00:31:20.598 "traddr": "10.0.0.1", 00:31:20.598 "trsvcid": "40910" 00:31:20.598 }, 00:31:20.598 "auth": { 00:31:20.598 "state": "completed", 00:31:20.598 "digest": "sha256", 00:31:20.598 "dhgroup": "null" 00:31:20.598 } 00:31:20.598 } 00:31:20.598 ]' 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:20.598 16:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:20.856 16:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:31:21.787 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:21.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:21.787 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:21.787 16:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.787 16:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:21.787 16:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.787 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:21.787 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:21.787 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:22.045 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:22.303 00:31:22.303 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:22.303 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:22.303 16:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:22.561 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.561 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:22.561 16:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.561 16:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:22.561 16:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.561 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:22.561 { 00:31:22.561 "cntlid": 7, 00:31:22.561 "qid": 0, 00:31:22.561 "state": "enabled", 00:31:22.561 "listen_address": { 00:31:22.561 "trtype": "TCP", 00:31:22.561 "adrfam": "IPv4", 00:31:22.561 "traddr": "10.0.0.2", 00:31:22.561 "trsvcid": "4420" 00:31:22.561 }, 00:31:22.561 "peer_address": { 00:31:22.561 "trtype": "TCP", 00:31:22.561 "adrfam": "IPv4", 00:31:22.561 "traddr": "10.0.0.1", 00:31:22.561 "trsvcid": "40926" 00:31:22.561 }, 00:31:22.561 "auth": { 00:31:22.561 "state": "completed", 00:31:22.561 "digest": "sha256", 00:31:22.561 "dhgroup": "null" 00:31:22.561 } 00:31:22.561 } 00:31:22.561 ]' 00:31:22.561 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:22.819 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:22.819 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:22.819 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:31:22.819 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:22.819 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:22.819 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:22.819 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:23.077 16:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:31:24.009 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:24.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:24.009 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:24.009 16:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.009 
16:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:24.009 16:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.009 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:31:24.009 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:24.010 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:24.010 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:24.267 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:31:24.268 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:24.268 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:24.268 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:31:24.268 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:31:24.268 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:24.268 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:24.268 16:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.268 16:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:24.268 16:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.268 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:24.268 16:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:24.525 00:31:24.525 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:24.525 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:24.525 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:24.783 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.784 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:24.784 16:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.784 16:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:24.784 16:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.784 16:44:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:24.784 { 00:31:24.784 "cntlid": 9, 00:31:24.784 "qid": 0, 00:31:24.784 "state": "enabled", 00:31:24.784 "listen_address": { 00:31:24.784 "trtype": "TCP", 00:31:24.784 "adrfam": "IPv4", 00:31:24.784 "traddr": "10.0.0.2", 00:31:24.784 "trsvcid": "4420" 00:31:24.784 }, 00:31:24.784 "peer_address": { 00:31:24.784 "trtype": "TCP", 00:31:24.784 "adrfam": "IPv4", 00:31:24.784 "traddr": "10.0.0.1", 00:31:24.784 "trsvcid": "40950" 00:31:24.784 }, 00:31:24.784 "auth": { 00:31:24.784 "state": "completed", 00:31:24.784 "digest": "sha256", 00:31:24.784 "dhgroup": "ffdhe2048" 00:31:24.784 } 00:31:24.784 } 00:31:24.784 ]' 00:31:24.784 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:24.784 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:24.784 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:25.042 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:31:25.042 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:25.042 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:25.042 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:25.042 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:25.300 16:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:31:26.233 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:26.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:26.233 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:26.233 16:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.233 16:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:26.233 16:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.233 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:26.233 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:26.233 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:26.491 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:31:26.491 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:26.491 16:44:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:26.491 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:31:26.491 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:31:26.491 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:26.491 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:26.491 16:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.491 16:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:26.491 16:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.491 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:26.491 16:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:26.749 00:31:26.749 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:26.749 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:26.749 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:27.007 { 00:31:27.007 "cntlid": 11, 00:31:27.007 "qid": 0, 00:31:27.007 "state": "enabled", 00:31:27.007 "listen_address": { 00:31:27.007 "trtype": "TCP", 00:31:27.007 "adrfam": "IPv4", 00:31:27.007 "traddr": "10.0.0.2", 00:31:27.007 "trsvcid": "4420" 00:31:27.007 }, 00:31:27.007 "peer_address": { 00:31:27.007 "trtype": "TCP", 00:31:27.007 "adrfam": "IPv4", 00:31:27.007 "traddr": "10.0.0.1", 00:31:27.007 "trsvcid": "40964" 00:31:27.007 }, 00:31:27.007 "auth": { 00:31:27.007 "state": "completed", 00:31:27.007 "digest": "sha256", 00:31:27.007 "dhgroup": "ffdhe2048" 00:31:27.007 } 00:31:27.007 } 00:31:27.007 ]' 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:27.007 16:44:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:27.007 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:27.265 16:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:31:28.638 16:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:28.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:28.638 16:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:28.638 16:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.638 16:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.638 16:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.638 16:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:28.638 16:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:28.638 16:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:28.638 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:28.896 00:31:28.896 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:28.896 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:28.896 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:29.155 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.155 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:29.155 16:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.155 16:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:29.155 16:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.155 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:29.155 { 00:31:29.155 "cntlid": 13, 00:31:29.155 "qid": 0, 00:31:29.155 "state": "enabled", 00:31:29.155 "listen_address": { 00:31:29.155 "trtype": "TCP", 00:31:29.155 "adrfam": "IPv4", 00:31:29.155 "traddr": "10.0.0.2", 00:31:29.155 "trsvcid": "4420" 00:31:29.155 }, 00:31:29.155 "peer_address": { 00:31:29.155 "trtype": "TCP", 00:31:29.155 "adrfam": "IPv4", 00:31:29.155 "traddr": "10.0.0.1", 00:31:29.155 "trsvcid": "55358" 00:31:29.155 }, 00:31:29.155 "auth": { 00:31:29.155 "state": "completed", 00:31:29.155 "digest": "sha256", 00:31:29.155 "dhgroup": "ffdhe2048" 00:31:29.155 } 00:31:29.155 } 00:31:29.155 ]' 00:31:29.155 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:29.155 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:29.155 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:29.413 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:31:29.413 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:29.413 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:29.413 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:29.413 16:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:29.671 16:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:31:30.604 16:44:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:30.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:30.604 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:30.604 16:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.604 16:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.604 16:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.604 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:30.604 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:30.604 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:30.862 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:31.120 00:31:31.121 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:31.121 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:31.121 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:31.378 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.378 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
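[Annotation] Each connect_authenticate round traced above repeats the same four-step RPC pattern: pin the host to a single digest/dhgroup pair with bdev_nvme_set_options, register the host's DH-HMAC-CHAP key(s) on the subsystem with nvmf_subsystem_add_host, attach with bdev_nvme_attach_controller (authentication runs during the NVMe-oF CONNECT), then read the negotiated parameters back from nvmf_subsystem_get_qpairs. A minimal sketch of one such round follows, assuming the same target (10.0.0.2:4420) and host RPC socket (/var/tmp/host.sock) as this run; "key3" stands for a key object created earlier in the test (not shown in this excerpt).

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
    # 1. Limit the host to one digest/dhgroup combination for this round.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # 2. Allow the host on the subsystem with its key (target-side RPC on the
    #    default socket); bidirectional rounds also pass --dhchap-ctrlr-key.
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
    # 3. Attach; DH-HMAC-CHAP runs as part of the fabric CONNECT.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
    # 4. Confirm what was negotiated on the first qpair
    #    (expected output here: "completed sha256 ffdhe2048").
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
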
00:31:31.378 16:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.378 16:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.378 16:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.378 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:31.378 { 00:31:31.378 "cntlid": 15, 00:31:31.378 "qid": 0, 00:31:31.378 "state": "enabled", 00:31:31.378 "listen_address": { 00:31:31.378 "trtype": "TCP", 00:31:31.378 "adrfam": "IPv4", 00:31:31.378 "traddr": "10.0.0.2", 00:31:31.378 "trsvcid": "4420" 00:31:31.378 }, 00:31:31.378 "peer_address": { 00:31:31.378 "trtype": "TCP", 00:31:31.378 "adrfam": "IPv4", 00:31:31.378 "traddr": "10.0.0.1", 00:31:31.379 "trsvcid": "55386" 00:31:31.379 }, 00:31:31.379 "auth": { 00:31:31.379 "state": "completed", 00:31:31.379 "digest": "sha256", 00:31:31.379 "dhgroup": "ffdhe2048" 00:31:31.379 } 00:31:31.379 } 00:31:31.379 ]' 00:31:31.379 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:31.379 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:31.379 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:31.379 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:31:31.379 16:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:31.379 16:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:31.379 16:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:31.379 16:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:31.636 16:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:31:32.570 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:32.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:32.570 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:32.570 16:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.570 16:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.570 16:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.570 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:31:32.570 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:32.570 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:32.570 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:32.827 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:31:32.827 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:32.827 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:32.827 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:31:32.827 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:31:32.827 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:32.827 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:32.827 16:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.827 16:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.827 16:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.828 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:32.828 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:33.399 00:31:33.399 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:33.399 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:33.399 16:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:33.399 16:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.399 16:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:33.399 16:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.399 16:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.658 16:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.658 16:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:33.658 { 00:31:33.658 "cntlid": 17, 00:31:33.658 "qid": 0, 00:31:33.658 "state": "enabled", 00:31:33.658 "listen_address": { 00:31:33.658 "trtype": "TCP", 00:31:33.658 "adrfam": "IPv4", 00:31:33.658 "traddr": "10.0.0.2", 00:31:33.658 "trsvcid": "4420" 00:31:33.658 }, 00:31:33.658 "peer_address": { 00:31:33.658 "trtype": "TCP", 00:31:33.658 "adrfam": "IPv4", 00:31:33.658 "traddr": "10.0.0.1", 00:31:33.658 "trsvcid": "55418" 00:31:33.658 }, 00:31:33.658 "auth": { 00:31:33.658 "state": "completed", 00:31:33.658 "digest": "sha256", 00:31:33.658 "dhgroup": "ffdhe3072" 00:31:33.658 } 00:31:33.658 } 00:31:33.658 ]' 00:31:33.658 16:44:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:33.658 16:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:33.658 16:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:33.658 16:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:33.658 16:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:33.658 16:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:33.658 16:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:33.658 16:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:33.916 16:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:31:34.849 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:34.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:34.849 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:34.849 16:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.849 16:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:34.849 16:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.849 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:34.849 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:34.849 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:35.107 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:31:35.107 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:35.107 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:35.107 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:31:35.107 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:31:35.107 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:35.107 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:35.107 16:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.107 
16:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:35.107 16:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.107 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:35.107 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:35.366 00:31:35.366 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:35.366 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:35.366 16:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:35.623 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.623 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:35.623 16:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.623 16:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:35.624 16:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.624 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:35.624 { 00:31:35.624 "cntlid": 19, 00:31:35.624 "qid": 0, 00:31:35.624 "state": "enabled", 00:31:35.624 "listen_address": { 00:31:35.624 "trtype": "TCP", 00:31:35.624 "adrfam": "IPv4", 00:31:35.624 "traddr": "10.0.0.2", 00:31:35.624 "trsvcid": "4420" 00:31:35.624 }, 00:31:35.624 "peer_address": { 00:31:35.624 "trtype": "TCP", 00:31:35.624 "adrfam": "IPv4", 00:31:35.624 "traddr": "10.0.0.1", 00:31:35.624 "trsvcid": "55454" 00:31:35.624 }, 00:31:35.624 "auth": { 00:31:35.624 "state": "completed", 00:31:35.624 "digest": "sha256", 00:31:35.624 "dhgroup": "ffdhe3072" 00:31:35.624 } 00:31:35.624 } 00:31:35.624 ]' 00:31:35.624 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:35.881 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:35.881 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:35.881 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:35.881 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:35.881 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:35.881 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:35.881 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:36.139 16:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:31:37.072 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:37.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:37.072 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:37.072 16:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.072 16:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:37.072 16:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.072 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:37.072 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:37.072 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:37.329 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:31:37.330 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:37.330 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:37.330 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:31:37.330 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:31:37.330 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:37.330 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:37.330 16:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.330 16:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:37.330 16:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.330 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:37.330 16:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:37.587 00:31:37.587 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:37.587 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
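[Annotation] The interleaved nvme connect invocations exercise the same keys through the kernel initiator. The --dhchap-secret and --dhchap-ctrl-secret arguments carry the host and controller keys in the NVMe DHHC-1 representation, DHHC-1:<t>:<base64 blob>:, where <t> is 00 for an untransformed key and 01/02/03 for a SHA-256/384/512-transformed key — hence the DHHC-1:00: through DHHC-1:03: prefixes visible in this log. A sketch of one kernel-side connect/disconnect cycle, with the base64 blobs elided (they appear verbatim above):

    # Kernel-initiator counterpart of the hostrpc attach; sketch only,
    # <...> are placeholders for the DHHC-1 blobs printed elsewhere in this log.
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "${hostnqn##*:}" \
        --dhchap-secret 'DHHC-1:02:<base64 host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:01:<base64 ctrl key>:'  # omit for one-way auth
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
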
00:31:37.587 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:37.845 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:38.103 { 00:31:38.103 "cntlid": 21, 00:31:38.103 "qid": 0, 00:31:38.103 "state": "enabled", 00:31:38.103 "listen_address": { 00:31:38.103 "trtype": "TCP", 00:31:38.103 "adrfam": "IPv4", 00:31:38.103 "traddr": "10.0.0.2", 00:31:38.103 "trsvcid": "4420" 00:31:38.103 }, 00:31:38.103 "peer_address": { 00:31:38.103 "trtype": "TCP", 00:31:38.103 "adrfam": "IPv4", 00:31:38.103 "traddr": "10.0.0.1", 00:31:38.103 "trsvcid": "40792" 00:31:38.103 }, 00:31:38.103 "auth": { 00:31:38.103 "state": "completed", 00:31:38.103 "digest": "sha256", 00:31:38.103 "dhgroup": "ffdhe3072" 00:31:38.103 } 00:31:38.103 } 00:31:38.103 ]' 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:38.103 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:38.361 16:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:31:39.294 16:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:39.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:39.294 16:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:39.294 16:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.294 16:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.294 16:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.294 16:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:31:39.294 16:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:39.294 16:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:39.563 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:39.845 00:31:39.845 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:39.845 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:39.845 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:40.124 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.124 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:40.124 16:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.124 16:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:40.124 16:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.124 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:40.124 { 00:31:40.124 "cntlid": 23, 00:31:40.124 "qid": 0, 00:31:40.124 "state": "enabled", 00:31:40.124 "listen_address": { 00:31:40.124 "trtype": "TCP", 00:31:40.124 "adrfam": "IPv4", 00:31:40.124 "traddr": "10.0.0.2", 00:31:40.124 "trsvcid": "4420" 00:31:40.124 }, 00:31:40.124 "peer_address": { 00:31:40.124 "trtype": "TCP", 00:31:40.124 "adrfam": "IPv4", 
00:31:40.124 "traddr": "10.0.0.1", 00:31:40.124 "trsvcid": "40820" 00:31:40.124 }, 00:31:40.124 "auth": { 00:31:40.124 "state": "completed", 00:31:40.124 "digest": "sha256", 00:31:40.124 "dhgroup": "ffdhe3072" 00:31:40.124 } 00:31:40.124 } 00:31:40.124 ]' 00:31:40.124 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:40.124 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:40.124 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:40.405 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:40.405 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:40.405 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:40.405 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:40.405 16:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:40.686 16:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:31:41.620 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:41.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:41.620 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:41.620 16:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.620 16:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:41.620 16:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.620 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:31:41.620 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:41.620 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:41.620 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:41.878 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:31:41.878 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:41.878 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:41.878 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:31:41.878 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:31:41.879 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:41.879 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.879 16:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.879 16:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:41.879 16:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.879 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.879 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:42.136 00:31:42.136 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:42.136 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:42.136 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:42.394 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.394 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:42.394 16:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.394 16:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:42.394 16:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.395 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:42.395 { 00:31:42.395 "cntlid": 25, 00:31:42.395 "qid": 0, 00:31:42.395 "state": "enabled", 00:31:42.395 "listen_address": { 00:31:42.395 "trtype": "TCP", 00:31:42.395 "adrfam": "IPv4", 00:31:42.395 "traddr": "10.0.0.2", 00:31:42.395 "trsvcid": "4420" 00:31:42.395 }, 00:31:42.395 "peer_address": { 00:31:42.395 "trtype": "TCP", 00:31:42.395 "adrfam": "IPv4", 00:31:42.395 "traddr": "10.0.0.1", 00:31:42.395 "trsvcid": "40836" 00:31:42.395 }, 00:31:42.395 "auth": { 00:31:42.395 "state": "completed", 00:31:42.395 "digest": "sha256", 00:31:42.395 "dhgroup": "ffdhe4096" 00:31:42.395 } 00:31:42.395 } 00:31:42.395 ]' 00:31:42.395 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:42.395 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:42.395 16:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:42.395 16:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:42.395 16:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:42.653 16:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:42.653 16:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:42.653 16:45:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:42.911 16:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:31:43.846 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:43.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:43.846 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:43.846 16:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.846 16:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:43.846 16:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.846 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:43.846 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:43.846 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:44.103 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:44.361 00:31:44.361 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:44.361 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:44.361 16:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:44.620 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.620 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:44.620 16:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.620 16:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:44.620 16:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.620 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:44.620 { 00:31:44.620 "cntlid": 27, 00:31:44.620 "qid": 0, 00:31:44.620 "state": "enabled", 00:31:44.620 "listen_address": { 00:31:44.620 "trtype": "TCP", 00:31:44.620 "adrfam": "IPv4", 00:31:44.620 "traddr": "10.0.0.2", 00:31:44.620 "trsvcid": "4420" 00:31:44.620 }, 00:31:44.620 "peer_address": { 00:31:44.620 "trtype": "TCP", 00:31:44.620 "adrfam": "IPv4", 00:31:44.620 "traddr": "10.0.0.1", 00:31:44.620 "trsvcid": "40862" 00:31:44.620 }, 00:31:44.620 "auth": { 00:31:44.620 "state": "completed", 00:31:44.620 "digest": "sha256", 00:31:44.620 "dhgroup": "ffdhe4096" 00:31:44.620 } 00:31:44.620 } 00:31:44.620 ]' 00:31:44.620 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:44.620 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:44.620 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:44.878 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:44.878 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:44.878 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:44.878 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:44.878 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:45.135 16:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:31:46.068 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:46.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:46.068 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 
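
[Editor's note] The stretch of trace above completes one full connect_authenticate iteration: pin the host to a single digest and DH group, register the host NQN on the subsystem with a named DH-HMAC-CHAP key, attach, verify, and tear everything down again. Condensed into plain commands, the cycle looks roughly like the sketch below. The RPC names, flags, addresses, and NQNs are taken verbatim from the trace; the shell variables and the PLACEHOLDER secrets are illustrative rather than the literal auth.sh source, and the target-side calls are assumed to use rpc.py's default socket, since the rpc_cmd lines in the trace carry no -s flag.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd

  # Host-side SPDK app: accept exactly one digest and one FFDHE group.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # Target side: allow the host, binding it to a named key; the controller
  # key is included when bidirectional authentication is being exercised.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach; the controller only shows up in
  # bdev_nvme_get_controllers if the DH-HMAC-CHAP handshake succeeded.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # ... the qpair auth state is verified here (see the jq probes in the trace) ...

  # Detach, then repeat the handshake with the Linux kernel initiator,
  # passing the secrets inline; PLACEHOLDER stands in for the
  # DHHC-1:xx:...: blobs visible in the log.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 8b464f06-2980-e311-ba20-001e67a94acd \
      --dhchap-secret PLACEHOLDER --dhchap-ctrl-secret PLACEHOLDER
  nvme disconnect -n "$subnqn"

  # Remove the host so the next digest/dhgroup/key combination starts clean.
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
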
00:31:46.068 16:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.068 16:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:46.068 16:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.068 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:46.068 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:46.068 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:46.326 16:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:46.584 00:31:46.584 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:46.584 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:46.584 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:46.841 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.841 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:46.841 16:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.841 16:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:46.841 16:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.841 
16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:46.841 { 00:31:46.841 "cntlid": 29, 00:31:46.841 "qid": 0, 00:31:46.841 "state": "enabled", 00:31:46.841 "listen_address": { 00:31:46.841 "trtype": "TCP", 00:31:46.841 "adrfam": "IPv4", 00:31:46.841 "traddr": "10.0.0.2", 00:31:46.841 "trsvcid": "4420" 00:31:46.841 }, 00:31:46.841 "peer_address": { 00:31:46.841 "trtype": "TCP", 00:31:46.841 "adrfam": "IPv4", 00:31:46.841 "traddr": "10.0.0.1", 00:31:46.841 "trsvcid": "40892" 00:31:46.841 }, 00:31:46.841 "auth": { 00:31:46.841 "state": "completed", 00:31:46.841 "digest": "sha256", 00:31:46.841 "dhgroup": "ffdhe4096" 00:31:46.841 } 00:31:46.841 } 00:31:46.841 ]' 00:31:46.841 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:47.098 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:47.098 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:47.098 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:47.098 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:47.098 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:47.098 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:47.098 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:47.355 16:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:31:48.289 16:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:48.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:48.289 16:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:48.289 16:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.289 16:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:48.289 16:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.289 16:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:48.289 16:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:48.289 16:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:48.546 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:49.111 00:31:49.111 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:49.111 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:49.111 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:49.111 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.111 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:49.112 16:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.112 16:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:49.369 16:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.369 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:49.369 { 00:31:49.369 "cntlid": 31, 00:31:49.369 "qid": 0, 00:31:49.369 "state": "enabled", 00:31:49.369 "listen_address": { 00:31:49.369 "trtype": "TCP", 00:31:49.369 "adrfam": "IPv4", 00:31:49.369 "traddr": "10.0.0.2", 00:31:49.369 "trsvcid": "4420" 00:31:49.369 }, 00:31:49.369 "peer_address": { 00:31:49.369 "trtype": "TCP", 00:31:49.369 "adrfam": "IPv4", 00:31:49.369 "traddr": "10.0.0.1", 00:31:49.369 "trsvcid": "43356" 00:31:49.369 }, 00:31:49.369 "auth": { 00:31:49.369 "state": "completed", 00:31:49.369 "digest": "sha256", 00:31:49.369 "dhgroup": "ffdhe4096" 00:31:49.369 } 00:31:49.369 } 00:31:49.369 ]' 00:31:49.369 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:49.369 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:49.369 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:49.369 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:49.369 16:45:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:49.369 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:49.369 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:49.369 16:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:49.626 16:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:31:50.558 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:50.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:50.558 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:50.558 16:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.558 16:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:50.558 16:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.558 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:31:50.558 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:50.558 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:50.558 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:31:50.816 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:51.380 00:31:51.380 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:51.380 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:51.380 16:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:51.637 { 00:31:51.637 "cntlid": 33, 00:31:51.637 "qid": 0, 00:31:51.637 "state": "enabled", 00:31:51.637 "listen_address": { 00:31:51.637 "trtype": "TCP", 00:31:51.637 "adrfam": "IPv4", 00:31:51.637 "traddr": "10.0.0.2", 00:31:51.637 "trsvcid": "4420" 00:31:51.637 }, 00:31:51.637 "peer_address": { 00:31:51.637 "trtype": "TCP", 00:31:51.637 "adrfam": "IPv4", 00:31:51.637 "traddr": "10.0.0.1", 00:31:51.637 "trsvcid": "43380" 00:31:51.637 }, 00:31:51.637 "auth": { 00:31:51.637 "state": "completed", 00:31:51.637 "digest": "sha256", 00:31:51.637 "dhgroup": "ffdhe6144" 00:31:51.637 } 00:31:51.637 } 00:31:51.637 ]' 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:51.637 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:51.894 16:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:31:52.827 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:31:52.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:52.827 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:52.827 16:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.827 16:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:52.827 16:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.827 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:52.827 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:52.827 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:53.085 16:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:53.651 00:31:53.651 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:53.651 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:53.651 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
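
[Editor's note] Each attach in this log is followed by the same three probes: the target reports per-qpair authentication metadata through nvmf_subsystem_get_qpairs, and the test compares the digest, dhgroup, and state fields against the values it just configured. A minimal standalone version of that check, assuming the JSON shape dumped throughout this log and the default target RPC socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0

  # The auth object on the first qpair carries the negotiated parameters.
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")

  # One assertion per negotiated parameter, mirroring the [[ ... ]] checks
  # after every attach above; under set -e, as autotest scripts typically
  # run, any mismatch aborts the test.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
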
00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:53.909 { 00:31:53.909 "cntlid": 35, 00:31:53.909 "qid": 0, 00:31:53.909 "state": "enabled", 00:31:53.909 "listen_address": { 00:31:53.909 "trtype": "TCP", 00:31:53.909 "adrfam": "IPv4", 00:31:53.909 "traddr": "10.0.0.2", 00:31:53.909 "trsvcid": "4420" 00:31:53.909 }, 00:31:53.909 "peer_address": { 00:31:53.909 "trtype": "TCP", 00:31:53.909 "adrfam": "IPv4", 00:31:53.909 "traddr": "10.0.0.1", 00:31:53.909 "trsvcid": "43400" 00:31:53.909 }, 00:31:53.909 "auth": { 00:31:53.909 "state": "completed", 00:31:53.909 "digest": "sha256", 00:31:53.909 "dhgroup": "ffdhe6144" 00:31:53.909 } 00:31:53.909 } 00:31:53.909 ]' 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:53.909 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:54.167 16:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:31:55.099 16:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:55.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:55.357 16:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:55.357 16:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.357 16:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:55.357 16:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.357 16:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:55.357 16:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:55.357 16:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
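
[Editor's note] The auth.sh@92 and auth.sh@93 markers that recur through this trace expose the driver loop: an outer pass over DH groups and an inner pass over key indices, with the host's bdev_nvme_set_options re-applied before every connect_authenticate call. Reconstructed from nothing but those markers and the values visible in this section, the loop is roughly the sketch below; the array contents are stand-ins (the real keys array presumably holds the secrets themselves), and an enclosing digest loop is likely, though only sha256 appears in this stretch.

  # Values observed in this section: four FFDHE groups, key indices 0-3.
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  keys=(key0 key1 key2 key3)

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # hostrpc (auth.sh@31) forwards to rpc.py -s /var/tmp/host.sock;
          # connect_authenticate (auth.sh@34 onward) runs one attach,
          # verify, and teardown cycle for the given digest/dhgroup/key.
          hostrpc bdev_nvme_set_options \
              --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done
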
00:31:55.357 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:31:55.357 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:55.357 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:55.357 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:31:55.357 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:31:55.357 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:55.357 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.357 16:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.357 16:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:55.615 16:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.615 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.615 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:56.181 00:31:56.181 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:56.181 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:56.181 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:56.181 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.181 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:56.181 16:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.181 16:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:56.181 16:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.181 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:56.181 { 00:31:56.181 "cntlid": 37, 00:31:56.181 "qid": 0, 00:31:56.181 "state": "enabled", 00:31:56.182 "listen_address": { 00:31:56.182 "trtype": "TCP", 00:31:56.182 "adrfam": "IPv4", 00:31:56.182 "traddr": "10.0.0.2", 00:31:56.182 "trsvcid": "4420" 00:31:56.182 }, 00:31:56.182 "peer_address": { 00:31:56.182 "trtype": "TCP", 00:31:56.182 "adrfam": "IPv4", 00:31:56.182 "traddr": "10.0.0.1", 00:31:56.182 "trsvcid": "43430" 00:31:56.182 }, 00:31:56.182 "auth": { 00:31:56.182 "state": "completed", 00:31:56.182 "digest": "sha256", 00:31:56.182 "dhgroup": "ffdhe6144" 00:31:56.182 } 00:31:56.182 } 00:31:56.182 ]' 00:31:56.182 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:31:56.439 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:56.439 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:56.439 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:56.439 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:56.439 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:56.439 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:56.439 16:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:56.697 16:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:31:57.630 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:57.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:57.630 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:57.630 16:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.630 16:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.630 16:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.630 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:57.630 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:57.630 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:57.887 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:31:57.887 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:57.887 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:31:57.887 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:31:57.887 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:31:57.887 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:57.888 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:31:57.888 16:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.888 16:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.888 16:45:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.888 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:57.888 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:58.452 00:31:58.452 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:58.452 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:58.452 16:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:58.709 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.709 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:58.709 16:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.710 16:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:58.710 16:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.710 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:58.710 { 00:31:58.710 "cntlid": 39, 00:31:58.710 "qid": 0, 00:31:58.710 "state": "enabled", 00:31:58.710 "listen_address": { 00:31:58.710 "trtype": "TCP", 00:31:58.710 "adrfam": "IPv4", 00:31:58.710 "traddr": "10.0.0.2", 00:31:58.710 "trsvcid": "4420" 00:31:58.710 }, 00:31:58.710 "peer_address": { 00:31:58.710 "trtype": "TCP", 00:31:58.710 "adrfam": "IPv4", 00:31:58.710 "traddr": "10.0.0.1", 00:31:58.710 "trsvcid": "42196" 00:31:58.710 }, 00:31:58.710 "auth": { 00:31:58.710 "state": "completed", 00:31:58.710 "digest": "sha256", 00:31:58.710 "dhgroup": "ffdhe6144" 00:31:58.710 } 00:31:58.710 } 00:31:58.710 ]' 00:31:58.710 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:58.710 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:58.710 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:58.710 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:58.710 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:58.710 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:58.710 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:58.710 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:58.968 16:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret 
DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:00.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:00.341 16:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:01.274 00:32:01.274 16:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:01.274 16:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:01.274 16:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:01.531 { 00:32:01.531 "cntlid": 41, 00:32:01.531 "qid": 0, 00:32:01.531 "state": "enabled", 00:32:01.531 "listen_address": { 00:32:01.531 "trtype": "TCP", 00:32:01.531 "adrfam": "IPv4", 00:32:01.531 "traddr": "10.0.0.2", 00:32:01.531 "trsvcid": "4420" 00:32:01.531 }, 00:32:01.531 "peer_address": { 00:32:01.531 "trtype": "TCP", 00:32:01.531 "adrfam": "IPv4", 00:32:01.531 "traddr": "10.0.0.1", 00:32:01.531 "trsvcid": "42224" 00:32:01.531 }, 00:32:01.531 "auth": { 00:32:01.531 "state": "completed", 00:32:01.531 "digest": "sha256", 00:32:01.531 "dhgroup": "ffdhe8192" 00:32:01.531 } 00:32:01.531 } 00:32:01.531 ]' 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:01.531 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:01.789 16:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:32:02.725 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:02.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:02.725 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:02.725 16:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.725 16:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:02.725 16:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.725 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:32:02.725 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:02.725 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:02.983 16:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:03.914 00:32:03.914 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:03.914 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:03.914 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:04.172 { 00:32:04.172 "cntlid": 43, 00:32:04.172 "qid": 0, 00:32:04.172 "state": "enabled", 00:32:04.172 "listen_address": { 00:32:04.172 "trtype": "TCP", 00:32:04.172 "adrfam": "IPv4", 00:32:04.172 "traddr": "10.0.0.2", 00:32:04.172 "trsvcid": "4420" 00:32:04.172 }, 00:32:04.172 "peer_address": { 
00:32:04.172 "trtype": "TCP", 00:32:04.172 "adrfam": "IPv4", 00:32:04.172 "traddr": "10.0.0.1", 00:32:04.172 "trsvcid": "42262" 00:32:04.172 }, 00:32:04.172 "auth": { 00:32:04.172 "state": "completed", 00:32:04.172 "digest": "sha256", 00:32:04.172 "dhgroup": "ffdhe8192" 00:32:04.172 } 00:32:04.172 } 00:32:04.172 ]' 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:04.172 16:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:04.430 16:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:32:05.362 16:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:05.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:05.362 16:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:05.363 16:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.363 16:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:05.363 16:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.363 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:05.363 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:05.363 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:05.929 16:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:06.863 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:06.863 { 00:32:06.863 "cntlid": 45, 00:32:06.863 "qid": 0, 00:32:06.863 "state": "enabled", 00:32:06.863 "listen_address": { 00:32:06.863 "trtype": "TCP", 00:32:06.863 "adrfam": "IPv4", 00:32:06.863 "traddr": "10.0.0.2", 00:32:06.863 "trsvcid": "4420" 00:32:06.863 }, 00:32:06.863 "peer_address": { 00:32:06.863 "trtype": "TCP", 00:32:06.863 "adrfam": "IPv4", 00:32:06.863 "traddr": "10.0.0.1", 00:32:06.863 "trsvcid": "42294" 00:32:06.863 }, 00:32:06.863 "auth": { 00:32:06.863 "state": "completed", 00:32:06.863 "digest": "sha256", 00:32:06.863 "dhgroup": "ffdhe8192" 00:32:06.863 } 00:32:06.863 } 00:32:06.863 ]' 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:06.863 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:07.121 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:07.121 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:07.121 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:07.121 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:07.121 16:45:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:07.379 16:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:32:08.312 16:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:08.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:08.312 16:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:08.312 16:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.312 16:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:08.312 16:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.312 16:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:08.312 16:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:08.312 16:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:08.570 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
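Every key index in the trace repeats one pattern: the target registers the host NQN with nvmf_subsystem_add_host and a --dhchap-key (the ${ckeys[$3]:+...} expansion adds --dhchap-ctrlr-key only when a controller key exists for that index, which is why the key3 iterations authenticate in one direction only), and the host then attaches with bdev_nvme_attach_controller using the matching key names. A minimal sketch of one such iteration, assuming an SPDK checkout as the working directory and placeholder HOSTNQN/HOSTID values; key1/ckey1 are key names loaded earlier in the run:

  # Target side: allow the host to connect to cnode0, authenticating with key1
  # and requesting bidirectional auth via the controller key ckey1.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Host side (separate SPDK app listening on /var/tmp/host.sock): pin the
  # negotiation to a single digest/dhgroup pair, then attach with the same keys.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1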
00:32:09.502 00:32:09.502 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:09.502 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:09.502 16:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:09.760 { 00:32:09.760 "cntlid": 47, 00:32:09.760 "qid": 0, 00:32:09.760 "state": "enabled", 00:32:09.760 "listen_address": { 00:32:09.760 "trtype": "TCP", 00:32:09.760 "adrfam": "IPv4", 00:32:09.760 "traddr": "10.0.0.2", 00:32:09.760 "trsvcid": "4420" 00:32:09.760 }, 00:32:09.760 "peer_address": { 00:32:09.760 "trtype": "TCP", 00:32:09.760 "adrfam": "IPv4", 00:32:09.760 "traddr": "10.0.0.1", 00:32:09.760 "trsvcid": "42780" 00:32:09.760 }, 00:32:09.760 "auth": { 00:32:09.760 "state": "completed", 00:32:09.760 "digest": "sha256", 00:32:09.760 "dhgroup": "ffdhe8192" 00:32:09.760 } 00:32:09.760 } 00:32:09.760 ]' 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:09.760 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:10.018 16:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:32:10.952 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:10.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:10.952 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:10.952 16:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.952 16:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:10.952 
16:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.952 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:32:10.952 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:10.952 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:10.952 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:10.952 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.209 16:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.774 00:32:11.774 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:11.774 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:11.774 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.032 16:45:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:12.032 { 00:32:12.032 "cntlid": 49, 00:32:12.032 "qid": 0, 00:32:12.032 "state": "enabled", 00:32:12.032 "listen_address": { 00:32:12.032 "trtype": "TCP", 00:32:12.032 "adrfam": "IPv4", 00:32:12.032 "traddr": "10.0.0.2", 00:32:12.032 "trsvcid": "4420" 00:32:12.032 }, 00:32:12.032 "peer_address": { 00:32:12.032 "trtype": "TCP", 00:32:12.032 "adrfam": "IPv4", 00:32:12.032 "traddr": "10.0.0.1", 00:32:12.032 "trsvcid": "42806" 00:32:12.032 }, 00:32:12.032 "auth": { 00:32:12.032 "state": "completed", 00:32:12.032 "digest": "sha384", 00:32:12.032 "dhgroup": "null" 00:32:12.032 } 00:32:12.032 } 00:32:12.032 ]' 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:12.032 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:12.289 16:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:32:13.221 16:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:13.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:13.221 16:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:13.221 16:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.221 16:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.221 16:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.221 16:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:13.221 16:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:13.221 16:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:13.478 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:14.043 00:32:14.043 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:14.043 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:14.043 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:14.301 { 00:32:14.301 "cntlid": 51, 00:32:14.301 "qid": 0, 00:32:14.301 "state": "enabled", 00:32:14.301 "listen_address": { 00:32:14.301 "trtype": "TCP", 00:32:14.301 "adrfam": "IPv4", 00:32:14.301 "traddr": "10.0.0.2", 00:32:14.301 "trsvcid": "4420" 00:32:14.301 }, 00:32:14.301 "peer_address": { 00:32:14.301 "trtype": "TCP", 00:32:14.301 "adrfam": "IPv4", 00:32:14.301 "traddr": "10.0.0.1", 00:32:14.301 "trsvcid": "42826" 00:32:14.301 }, 00:32:14.301 "auth": { 00:32:14.301 "state": "completed", 00:32:14.301 "digest": "sha384", 00:32:14.301 "dhgroup": "null" 00:32:14.301 } 00:32:14.301 } 00:32:14.301 ]' 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
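Each attach is then verified rather than assumed: the test fetches the subsystem's qpairs and compares the negotiated auth fields of qpair 0 against the only digest and DH group the host was allowed to offer, so a silent fallback to a different algorithm would fail the run. A compact sketch of that check, assuming $digest and $dhgroup hold the expected values for the current loop iteration:

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # .auth reports what was actually negotiated on the wire for this qpair.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

A dhgroup of "null" is a legitimate negotiated value here: it means DH-HMAC-CHAP ran without the optional Diffie-Hellman exchange, which is exactly what the sha384/null iterations above exercise.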
00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:14.301 16:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:14.558 16:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:32:15.489 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:15.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:15.490 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:15.490 16:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.490 16:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:15.490 16:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.490 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:15.490 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:15.490 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:32:15.747 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:16.311 00:32:16.312 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:16.312 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:16.312 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:16.569 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.569 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:16.569 16:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.569 16:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:16.569 16:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.569 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:16.569 { 00:32:16.569 "cntlid": 53, 00:32:16.569 "qid": 0, 00:32:16.569 "state": "enabled", 00:32:16.569 "listen_address": { 00:32:16.569 "trtype": "TCP", 00:32:16.569 "adrfam": "IPv4", 00:32:16.569 "traddr": "10.0.0.2", 00:32:16.569 "trsvcid": "4420" 00:32:16.569 }, 00:32:16.569 "peer_address": { 00:32:16.569 "trtype": "TCP", 00:32:16.569 "adrfam": "IPv4", 00:32:16.569 "traddr": "10.0.0.1", 00:32:16.569 "trsvcid": "42844" 00:32:16.569 }, 00:32:16.569 "auth": { 00:32:16.569 "state": "completed", 00:32:16.569 "digest": "sha384", 00:32:16.569 "dhgroup": "null" 00:32:16.569 } 00:32:16.569 } 00:32:16.569 ]' 00:32:16.569 16:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:16.569 16:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:16.569 16:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:16.569 16:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:32:16.569 16:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:16.569 16:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:16.569 16:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:16.569 16:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:16.827 16:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:32:17.760 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:17.760 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:32:17.760 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:17.760 16:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.760 16:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.760 16:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.760 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:17.760 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:17.760 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:18.018 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:32:18.018 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:18.018 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:18.018 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:18.018 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:18.018 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:18.018 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:32:18.018 16:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.018 16:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:18.018 16:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.019 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:18.019 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:18.585 00:32:18.585 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:18.585 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:18.585 16:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:18.842 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.842 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:18.842 16:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.842 16:45:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:32:18.842 16:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.842 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:18.842 { 00:32:18.842 "cntlid": 55, 00:32:18.842 "qid": 0, 00:32:18.843 "state": "enabled", 00:32:18.843 "listen_address": { 00:32:18.843 "trtype": "TCP", 00:32:18.843 "adrfam": "IPv4", 00:32:18.843 "traddr": "10.0.0.2", 00:32:18.843 "trsvcid": "4420" 00:32:18.843 }, 00:32:18.843 "peer_address": { 00:32:18.843 "trtype": "TCP", 00:32:18.843 "adrfam": "IPv4", 00:32:18.843 "traddr": "10.0.0.1", 00:32:18.843 "trsvcid": "60814" 00:32:18.843 }, 00:32:18.843 "auth": { 00:32:18.843 "state": "completed", 00:32:18.843 "digest": "sha384", 00:32:18.843 "dhgroup": "null" 00:32:18.843 } 00:32:18.843 } 00:32:18.843 ]' 00:32:18.843 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:18.843 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:18.843 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:18.843 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:32:18.843 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:18.843 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:18.843 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:18.843 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:19.101 16:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:32:20.035 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:20.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:20.035 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:20.035 16:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.035 16:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.035 16:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.035 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:20.035 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:20.035 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:20.035 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:20.293 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:32:20.293 
16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:20.293 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:20.293 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:20.293 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:20.293 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:20.293 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:20.293 16:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.293 16:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.293 16:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.293 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:20.293 16:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:20.551 00:32:20.808 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:20.808 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:20.808 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:21.066 { 00:32:21.066 "cntlid": 57, 00:32:21.066 "qid": 0, 00:32:21.066 "state": "enabled", 00:32:21.066 "listen_address": { 00:32:21.066 "trtype": "TCP", 00:32:21.066 "adrfam": "IPv4", 00:32:21.066 "traddr": "10.0.0.2", 00:32:21.066 "trsvcid": "4420" 00:32:21.066 }, 00:32:21.066 "peer_address": { 00:32:21.066 "trtype": "TCP", 00:32:21.066 "adrfam": "IPv4", 00:32:21.066 "traddr": "10.0.0.1", 00:32:21.066 "trsvcid": "60844" 00:32:21.066 }, 00:32:21.066 "auth": { 00:32:21.066 "state": "completed", 00:32:21.066 "digest": "sha384", 00:32:21.066 "dhgroup": "ffdhe2048" 00:32:21.066 } 00:32:21.066 } 00:32:21.066 ]' 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:21.066 16:45:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:21.066 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:21.324 16:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:32:22.257 16:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:22.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:22.257 16:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:22.257 16:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.257 16:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:22.257 16:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.257 16:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:22.257 16:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:22.257 16:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:22.514 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:32:22.514 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:22.515 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:22.515 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:22.515 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:22.515 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:22.515 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.515 16:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.515 16:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:22.515 16:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.515 16:45:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.515 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:23.080 00:32:23.080 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:23.080 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:23.080 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:23.337 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.337 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:23.337 16:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.337 16:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:23.337 16:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.337 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:23.337 { 00:32:23.337 "cntlid": 59, 00:32:23.337 "qid": 0, 00:32:23.337 "state": "enabled", 00:32:23.337 "listen_address": { 00:32:23.337 "trtype": "TCP", 00:32:23.337 "adrfam": "IPv4", 00:32:23.337 "traddr": "10.0.0.2", 00:32:23.337 "trsvcid": "4420" 00:32:23.337 }, 00:32:23.337 "peer_address": { 00:32:23.337 "trtype": "TCP", 00:32:23.338 "adrfam": "IPv4", 00:32:23.338 "traddr": "10.0.0.1", 00:32:23.338 "trsvcid": "60860" 00:32:23.338 }, 00:32:23.338 "auth": { 00:32:23.338 "state": "completed", 00:32:23.338 "digest": "sha384", 00:32:23.338 "dhgroup": "ffdhe2048" 00:32:23.338 } 00:32:23.338 } 00:32:23.338 ]' 00:32:23.338 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:23.338 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:23.338 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:23.338 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:23.338 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:23.338 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:23.338 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:23.338 16:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:23.594 16:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret 
DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:32:24.526 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:24.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:24.526 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:24.526 16:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.526 16:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:24.526 16:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.526 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:24.526 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:24.526 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:24.784 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:25.042 00:32:25.042 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:25.042 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:25.042 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:32:25.300 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.300 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:25.300 16:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.300 16:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:25.300 16:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.300 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:25.300 { 00:32:25.300 "cntlid": 61, 00:32:25.300 "qid": 0, 00:32:25.300 "state": "enabled", 00:32:25.300 "listen_address": { 00:32:25.300 "trtype": "TCP", 00:32:25.300 "adrfam": "IPv4", 00:32:25.300 "traddr": "10.0.0.2", 00:32:25.300 "trsvcid": "4420" 00:32:25.300 }, 00:32:25.300 "peer_address": { 00:32:25.300 "trtype": "TCP", 00:32:25.300 "adrfam": "IPv4", 00:32:25.300 "traddr": "10.0.0.1", 00:32:25.300 "trsvcid": "60894" 00:32:25.300 }, 00:32:25.300 "auth": { 00:32:25.300 "state": "completed", 00:32:25.300 "digest": "sha384", 00:32:25.300 "dhgroup": "ffdhe2048" 00:32:25.300 } 00:32:25.300 } 00:32:25.300 ]' 00:32:25.300 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:25.558 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:25.558 16:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:25.558 16:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:25.558 16:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:25.558 16:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:25.558 16:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:25.558 16:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:25.816 16:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:32:26.749 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:26.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:26.749 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:26.749 16:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.749 16:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:26.749 16:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.749 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:26.749 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:32:26.749 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:27.007 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:27.573 00:32:27.573 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:27.573 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:27.573 16:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:27.573 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.573 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:27.573 16:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.573 16:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:27.573 16:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.573 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:27.573 { 00:32:27.573 "cntlid": 63, 00:32:27.573 "qid": 0, 00:32:27.573 "state": "enabled", 00:32:27.573 "listen_address": { 00:32:27.573 "trtype": "TCP", 00:32:27.573 "adrfam": "IPv4", 00:32:27.573 "traddr": "10.0.0.2", 00:32:27.573 "trsvcid": "4420" 00:32:27.573 }, 00:32:27.573 "peer_address": { 00:32:27.573 "trtype": "TCP", 00:32:27.573 "adrfam": "IPv4", 00:32:27.573 "traddr": "10.0.0.1", 00:32:27.573 "trsvcid": "56786" 00:32:27.573 }, 00:32:27.573 "auth": { 00:32:27.573 "state": "completed", 00:32:27.573 "digest": 
"sha384", 00:32:27.573 "dhgroup": "ffdhe2048" 00:32:27.573 } 00:32:27.573 } 00:32:27.573 ]' 00:32:27.573 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:27.830 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:27.830 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:27.830 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:27.830 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:27.831 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:27.831 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:27.831 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:28.088 16:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:32:29.020 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:29.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:29.020 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:29.020 16:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.021 16:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.021 16:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.021 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:29.021 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:29.021 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:29.021 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.277 16:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.533 00:32:29.534 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:29.534 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:29.534 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:29.790 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.790 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:29.790 16:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.790 16:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.790 16:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.790 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:29.790 { 00:32:29.790 "cntlid": 65, 00:32:29.790 "qid": 0, 00:32:29.790 "state": "enabled", 00:32:29.790 "listen_address": { 00:32:29.790 "trtype": "TCP", 00:32:29.790 "adrfam": "IPv4", 00:32:29.790 "traddr": "10.0.0.2", 00:32:29.790 "trsvcid": "4420" 00:32:29.790 }, 00:32:29.790 "peer_address": { 00:32:29.790 "trtype": "TCP", 00:32:29.790 "adrfam": "IPv4", 00:32:29.790 "traddr": "10.0.0.1", 00:32:29.790 "trsvcid": "56808" 00:32:29.790 }, 00:32:29.790 "auth": { 00:32:29.790 "state": "completed", 00:32:29.790 "digest": "sha384", 00:32:29.790 "dhgroup": "ffdhe3072" 00:32:29.790 } 00:32:29.790 } 00:32:29.790 ]' 00:32:29.790 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:30.048 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:30.048 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:30.048 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:30.048 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:30.048 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:30.048 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:30.048 16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:30.306 
16:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:32:31.238 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:31.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:31.238 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:31.238 16:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.238 16:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.238 16:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.238 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:31.238 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:31.238 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.496 16:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.753 00:32:31.753 16:45:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:31.753 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:31.753 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:32.010 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.010 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:32.011 16:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.011 16:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:32.011 16:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.011 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:32.011 { 00:32:32.011 "cntlid": 67, 00:32:32.011 "qid": 0, 00:32:32.011 "state": "enabled", 00:32:32.011 "listen_address": { 00:32:32.011 "trtype": "TCP", 00:32:32.011 "adrfam": "IPv4", 00:32:32.011 "traddr": "10.0.0.2", 00:32:32.011 "trsvcid": "4420" 00:32:32.011 }, 00:32:32.011 "peer_address": { 00:32:32.011 "trtype": "TCP", 00:32:32.011 "adrfam": "IPv4", 00:32:32.011 "traddr": "10.0.0.1", 00:32:32.011 "trsvcid": "56832" 00:32:32.011 }, 00:32:32.011 "auth": { 00:32:32.011 "state": "completed", 00:32:32.011 "digest": "sha384", 00:32:32.011 "dhgroup": "ffdhe3072" 00:32:32.011 } 00:32:32.011 } 00:32:32.011 ]' 00:32:32.011 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:32.268 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:32.268 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:32.268 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:32.268 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:32.268 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:32.268 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:32.268 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:32.526 16:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:32:33.467 16:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:33.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:33.467 16:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:33.467 16:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.467 16:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:33.467 
16:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.467 16:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:33.467 16:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:33.467 16:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.724 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.981 00:32:33.982 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:33.982 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:33.982 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:34.256 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.256 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:34.256 16:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.256 16:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.256 16:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.256 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:34.256 { 00:32:34.256 "cntlid": 69, 00:32:34.256 "qid": 0, 00:32:34.256 "state": "enabled", 00:32:34.256 "listen_address": { 
00:32:34.256 "trtype": "TCP", 00:32:34.256 "adrfam": "IPv4", 00:32:34.256 "traddr": "10.0.0.2", 00:32:34.256 "trsvcid": "4420" 00:32:34.256 }, 00:32:34.256 "peer_address": { 00:32:34.256 "trtype": "TCP", 00:32:34.256 "adrfam": "IPv4", 00:32:34.256 "traddr": "10.0.0.1", 00:32:34.256 "trsvcid": "56872" 00:32:34.256 }, 00:32:34.256 "auth": { 00:32:34.256 "state": "completed", 00:32:34.256 "digest": "sha384", 00:32:34.256 "dhgroup": "ffdhe3072" 00:32:34.256 } 00:32:34.256 } 00:32:34.256 ]' 00:32:34.256 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:34.256 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:34.256 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:34.256 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:34.256 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:34.518 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:34.518 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:34.518 16:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:34.776 16:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:32:35.708 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:35.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:35.708 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:35.708 16:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.708 16:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:35.708 16:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.708 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:35.708 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:35.708 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:35.966 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:32:35.966 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:35.966 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:35.966 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:35.966 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:35.966 
16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:35.966 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:32:35.966 16:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.966 16:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:35.966 16:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.966 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:35.966 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:36.224 00:32:36.224 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:36.224 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:36.224 16:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:36.481 16:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.481 16:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:36.481 16:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.481 16:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:36.481 16:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.481 16:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:36.481 { 00:32:36.481 "cntlid": 71, 00:32:36.481 "qid": 0, 00:32:36.481 "state": "enabled", 00:32:36.481 "listen_address": { 00:32:36.481 "trtype": "TCP", 00:32:36.481 "adrfam": "IPv4", 00:32:36.481 "traddr": "10.0.0.2", 00:32:36.481 "trsvcid": "4420" 00:32:36.481 }, 00:32:36.481 "peer_address": { 00:32:36.481 "trtype": "TCP", 00:32:36.481 "adrfam": "IPv4", 00:32:36.481 "traddr": "10.0.0.1", 00:32:36.481 "trsvcid": "56888" 00:32:36.481 }, 00:32:36.481 "auth": { 00:32:36.481 "state": "completed", 00:32:36.481 "digest": "sha384", 00:32:36.481 "dhgroup": "ffdhe3072" 00:32:36.481 } 00:32:36.481 } 00:32:36.481 ]' 00:32:36.481 16:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:36.481 16:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:36.481 16:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:36.481 16:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:36.481 16:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:36.739 16:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:36.739 16:45:56 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:36.739 16:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:36.998 16:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:32:37.931 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:37.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:37.931 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:37.931 16:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.931 16:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:37.931 16:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.931 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:37.931 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:37.931 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:37.931 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.189 16:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.446 00:32:38.730 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:38.730 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:38.730 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:38.731 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.731 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:38.731 16:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.731 16:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:38.731 16:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.731 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:38.731 { 00:32:38.731 "cntlid": 73, 00:32:38.731 "qid": 0, 00:32:38.731 "state": "enabled", 00:32:38.731 "listen_address": { 00:32:38.731 "trtype": "TCP", 00:32:38.731 "adrfam": "IPv4", 00:32:38.731 "traddr": "10.0.0.2", 00:32:38.731 "trsvcid": "4420" 00:32:38.731 }, 00:32:38.731 "peer_address": { 00:32:38.731 "trtype": "TCP", 00:32:38.731 "adrfam": "IPv4", 00:32:38.731 "traddr": "10.0.0.1", 00:32:38.731 "trsvcid": "37166" 00:32:38.731 }, 00:32:38.731 "auth": { 00:32:38.731 "state": "completed", 00:32:38.731 "digest": "sha384", 00:32:38.731 "dhgroup": "ffdhe4096" 00:32:38.731 } 00:32:38.731 } 00:32:38.731 ]' 00:32:38.731 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:38.988 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:38.989 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:38.989 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:38.989 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:38.989 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:38.989 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:38.989 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:39.247 16:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:32:40.180 16:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:40.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:40.180 16:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:40.180 16:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.180 16:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:40.180 16:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.180 16:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:40.180 16:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:40.180 16:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.438 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:41.004 00:32:41.004 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:41.004 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:41.004 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:41.004 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.004 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:41.004 16:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.004 16:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:32:41.004 16:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.004 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:41.004 { 00:32:41.004 "cntlid": 75, 00:32:41.004 "qid": 0, 00:32:41.004 "state": "enabled", 00:32:41.004 "listen_address": { 00:32:41.004 "trtype": "TCP", 00:32:41.004 "adrfam": "IPv4", 00:32:41.004 "traddr": "10.0.0.2", 00:32:41.004 "trsvcid": "4420" 00:32:41.004 }, 00:32:41.004 "peer_address": { 00:32:41.004 "trtype": "TCP", 00:32:41.004 "adrfam": "IPv4", 00:32:41.004 "traddr": "10.0.0.1", 00:32:41.004 "trsvcid": "37198" 00:32:41.004 }, 00:32:41.004 "auth": { 00:32:41.004 "state": "completed", 00:32:41.004 "digest": "sha384", 00:32:41.004 "dhgroup": "ffdhe4096" 00:32:41.004 } 00:32:41.004 } 00:32:41.004 ]' 00:32:41.004 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:41.263 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:41.263 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:41.263 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:41.263 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:41.263 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:41.263 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:41.263 16:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:41.520 16:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:32:42.454 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:42.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:42.454 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:42.454 16:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.454 16:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:42.454 16:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.454 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:42.454 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:42.454 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:42.712 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.347 00:32:43.347 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:43.347 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:43.347 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:43.347 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.347 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:43.347 16:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.347 16:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:43.347 16:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.347 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:43.347 { 00:32:43.347 "cntlid": 77, 00:32:43.347 "qid": 0, 00:32:43.347 "state": "enabled", 00:32:43.347 "listen_address": { 00:32:43.347 "trtype": "TCP", 00:32:43.347 "adrfam": "IPv4", 00:32:43.347 "traddr": "10.0.0.2", 00:32:43.347 "trsvcid": "4420" 00:32:43.347 }, 00:32:43.347 "peer_address": { 00:32:43.347 "trtype": "TCP", 00:32:43.347 "adrfam": "IPv4", 00:32:43.347 "traddr": "10.0.0.1", 00:32:43.347 "trsvcid": "37238" 00:32:43.347 }, 00:32:43.347 "auth": { 00:32:43.347 "state": "completed", 00:32:43.347 "digest": "sha384", 00:32:43.347 "dhgroup": "ffdhe4096" 00:32:43.347 } 00:32:43.347 } 00:32:43.347 ]' 00:32:43.347 16:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:43.623 16:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:43.623 16:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:32:43.623 16:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:43.623 16:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:43.623 16:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:43.623 16:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:43.623 16:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:43.902 16:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:32:44.856 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:44.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:44.856 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:44.856 16:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.856 16:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:44.856 16:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.856 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:44.856 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:44.856 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:45.114 16:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:45.371 00:32:45.630 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:45.630 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:45.630 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:45.630 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.630 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:45.630 16:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.630 16:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.630 16:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.630 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:45.630 { 00:32:45.630 "cntlid": 79, 00:32:45.630 "qid": 0, 00:32:45.630 "state": "enabled", 00:32:45.630 "listen_address": { 00:32:45.630 "trtype": "TCP", 00:32:45.630 "adrfam": "IPv4", 00:32:45.630 "traddr": "10.0.0.2", 00:32:45.630 "trsvcid": "4420" 00:32:45.630 }, 00:32:45.630 "peer_address": { 00:32:45.630 "trtype": "TCP", 00:32:45.630 "adrfam": "IPv4", 00:32:45.630 "traddr": "10.0.0.1", 00:32:45.630 "trsvcid": "37260" 00:32:45.630 }, 00:32:45.630 "auth": { 00:32:45.630 "state": "completed", 00:32:45.630 "digest": "sha384", 00:32:45.630 "dhgroup": "ffdhe4096" 00:32:45.630 } 00:32:45.630 } 00:32:45.630 ]' 00:32:45.630 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:45.888 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:45.888 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:45.888 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:45.888 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:45.888 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:45.889 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:45.889 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:46.147 16:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:32:47.080 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:47.080 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:47.080 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:47.080 16:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.080 16:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.080 16:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.080 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.080 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:47.080 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:47.080 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.338 16:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.903 00:32:47.903 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:47.903 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:47.903 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:48.161 { 00:32:48.161 "cntlid": 81, 00:32:48.161 "qid": 0, 00:32:48.161 "state": "enabled", 00:32:48.161 "listen_address": { 00:32:48.161 "trtype": "TCP", 00:32:48.161 "adrfam": "IPv4", 00:32:48.161 "traddr": "10.0.0.2", 00:32:48.161 "trsvcid": "4420" 00:32:48.161 }, 00:32:48.161 "peer_address": { 00:32:48.161 "trtype": "TCP", 00:32:48.161 "adrfam": "IPv4", 00:32:48.161 "traddr": "10.0.0.1", 00:32:48.161 "trsvcid": "39266" 00:32:48.161 }, 00:32:48.161 "auth": { 00:32:48.161 "state": "completed", 00:32:48.161 "digest": "sha384", 00:32:48.161 "dhgroup": "ffdhe6144" 00:32:48.161 } 00:32:48.161 } 00:32:48.161 ]' 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:48.161 16:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:48.727 16:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:32:49.696 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:49.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:49.696 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:49.696 16:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.696 16:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:49.696 16:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.696 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:49.696 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:49.696 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.954 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.520 00:32:50.520 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:50.520 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:50.520 16:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:50.520 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.520 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:50.520 16:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.520 16:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.520 16:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.520 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:50.520 { 00:32:50.520 "cntlid": 83, 00:32:50.520 "qid": 0, 00:32:50.520 "state": "enabled", 00:32:50.520 "listen_address": { 00:32:50.520 "trtype": "TCP", 00:32:50.520 "adrfam": "IPv4", 00:32:50.520 "traddr": "10.0.0.2", 00:32:50.520 "trsvcid": "4420" 00:32:50.520 }, 00:32:50.520 "peer_address": { 00:32:50.520 "trtype": "TCP", 00:32:50.520 "adrfam": "IPv4", 00:32:50.520 "traddr": "10.0.0.1", 00:32:50.520 "trsvcid": "39280" 00:32:50.520 }, 00:32:50.520 "auth": { 00:32:50.520 "state": "completed", 00:32:50.520 "digest": "sha384", 00:32:50.520 
"dhgroup": "ffdhe6144" 00:32:50.520 } 00:32:50.520 } 00:32:50.520 ]' 00:32:50.520 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:50.777 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:50.777 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:50.777 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:50.777 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:50.777 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:50.777 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:50.777 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:51.036 16:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:32:51.970 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:51.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:51.970 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:51.970 16:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.970 16:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:51.970 16:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.970 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:51.970 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:51.970 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.228 16:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.793 00:32:53.050 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:53.050 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:53.050 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:53.307 { 00:32:53.307 "cntlid": 85, 00:32:53.307 "qid": 0, 00:32:53.307 "state": "enabled", 00:32:53.307 "listen_address": { 00:32:53.307 "trtype": "TCP", 00:32:53.307 "adrfam": "IPv4", 00:32:53.307 "traddr": "10.0.0.2", 00:32:53.307 "trsvcid": "4420" 00:32:53.307 }, 00:32:53.307 "peer_address": { 00:32:53.307 "trtype": "TCP", 00:32:53.307 "adrfam": "IPv4", 00:32:53.307 "traddr": "10.0.0.1", 00:32:53.307 "trsvcid": "39318" 00:32:53.307 }, 00:32:53.307 "auth": { 00:32:53.307 "state": "completed", 00:32:53.307 "digest": "sha384", 00:32:53.307 "dhgroup": "ffdhe6144" 00:32:53.307 } 00:32:53.307 } 00:32:53.307 ]' 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:53.307 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:53.308 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:53.308 16:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:53.564 16:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:32:54.497 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:54.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:54.497 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:54.497 16:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.497 16:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:54.497 16:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.497 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:54.497 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:54.497 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:54.755 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:55.321 00:32:55.321 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:55.321 16:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:55.321 16:46:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:55.579 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.579 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:55.579 16:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.579 16:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:55.579 16:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.579 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:55.579 { 00:32:55.579 "cntlid": 87, 00:32:55.579 "qid": 0, 00:32:55.579 "state": "enabled", 00:32:55.579 "listen_address": { 00:32:55.579 "trtype": "TCP", 00:32:55.579 "adrfam": "IPv4", 00:32:55.579 "traddr": "10.0.0.2", 00:32:55.579 "trsvcid": "4420" 00:32:55.579 }, 00:32:55.579 "peer_address": { 00:32:55.579 "trtype": "TCP", 00:32:55.579 "adrfam": "IPv4", 00:32:55.579 "traddr": "10.0.0.1", 00:32:55.579 "trsvcid": "39344" 00:32:55.579 }, 00:32:55.579 "auth": { 00:32:55.579 "state": "completed", 00:32:55.579 "digest": "sha384", 00:32:55.579 "dhgroup": "ffdhe6144" 00:32:55.579 } 00:32:55.579 } 00:32:55.579 ]' 00:32:55.579 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:55.579 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:55.579 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:55.837 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:55.837 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:55.837 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:55.837 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:55.837 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:56.095 16:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:32:57.028 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:57.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:57.028 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:57.028 16:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.029 16:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:57.029 16:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.029 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:57.029 16:46:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:57.029 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:57.029 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:57.287 16:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.220 00:32:58.220 16:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:58.220 16:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:58.220 16:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:58.478 16:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.478 16:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:58.478 16:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.478 16:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:58.478 16:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.478 16:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:58.478 { 00:32:58.478 "cntlid": 89, 00:32:58.478 "qid": 0, 00:32:58.478 "state": "enabled", 00:32:58.478 "listen_address": { 00:32:58.478 "trtype": "TCP", 00:32:58.478 "adrfam": "IPv4", 00:32:58.478 "traddr": "10.0.0.2", 00:32:58.478 
"trsvcid": "4420" 00:32:58.478 }, 00:32:58.478 "peer_address": { 00:32:58.478 "trtype": "TCP", 00:32:58.478 "adrfam": "IPv4", 00:32:58.478 "traddr": "10.0.0.1", 00:32:58.478 "trsvcid": "37406" 00:32:58.478 }, 00:32:58.478 "auth": { 00:32:58.478 "state": "completed", 00:32:58.478 "digest": "sha384", 00:32:58.478 "dhgroup": "ffdhe8192" 00:32:58.478 } 00:32:58.478 } 00:32:58.478 ]' 00:32:58.478 16:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:58.478 16:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:58.478 16:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:58.478 16:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:58.478 16:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:58.478 16:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:58.478 16:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:58.478 16:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:58.736 16:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:32:59.669 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:59.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:59.669 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:59.669 16:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.669 16:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.926 16:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.926 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:59.926 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:59.926 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.184 16:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.118 00:33:01.118 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:01.118 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:01.118 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:01.118 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.118 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:01.118 16:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.118 16:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:01.118 16:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.118 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:01.118 { 00:33:01.118 "cntlid": 91, 00:33:01.118 "qid": 0, 00:33:01.118 "state": "enabled", 00:33:01.118 "listen_address": { 00:33:01.118 "trtype": "TCP", 00:33:01.118 "adrfam": "IPv4", 00:33:01.118 "traddr": "10.0.0.2", 00:33:01.118 "trsvcid": "4420" 00:33:01.118 }, 00:33:01.118 "peer_address": { 00:33:01.118 "trtype": "TCP", 00:33:01.118 "adrfam": "IPv4", 00:33:01.118 "traddr": "10.0.0.1", 00:33:01.118 "trsvcid": "37448" 00:33:01.118 }, 00:33:01.118 "auth": { 00:33:01.118 "state": "completed", 00:33:01.118 "digest": "sha384", 00:33:01.118 "dhgroup": "ffdhe8192" 00:33:01.118 } 00:33:01.118 } 00:33:01.118 ]' 00:33:01.118 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:01.376 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:01.376 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:01.376 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:01.376 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:01.376 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:01.376 16:46:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:01.376 16:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:01.633 16:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:33:02.566 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:02.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:02.566 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:02.566 16:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.566 16:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:02.566 16:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.566 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:02.566 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:02.566 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:02.823 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:33:02.823 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:02.823 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:02.823 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:02.823 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:02.823 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:02.823 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.823 16:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.823 16:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:02.824 16:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.824 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.824 16:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.756 00:33:03.756 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:03.756 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:03.756 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:04.013 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.014 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:04.014 16:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.014 16:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:04.014 16:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.014 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:04.014 { 00:33:04.014 "cntlid": 93, 00:33:04.014 "qid": 0, 00:33:04.014 "state": "enabled", 00:33:04.014 "listen_address": { 00:33:04.014 "trtype": "TCP", 00:33:04.014 "adrfam": "IPv4", 00:33:04.014 "traddr": "10.0.0.2", 00:33:04.014 "trsvcid": "4420" 00:33:04.014 }, 00:33:04.014 "peer_address": { 00:33:04.014 "trtype": "TCP", 00:33:04.014 "adrfam": "IPv4", 00:33:04.014 "traddr": "10.0.0.1", 00:33:04.014 "trsvcid": "37478" 00:33:04.014 }, 00:33:04.014 "auth": { 00:33:04.014 "state": "completed", 00:33:04.014 "digest": "sha384", 00:33:04.014 "dhgroup": "ffdhe8192" 00:33:04.014 } 00:33:04.014 } 00:33:04.014 ]' 00:33:04.014 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:04.014 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:04.014 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:04.271 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:04.271 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:04.271 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:04.271 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:04.271 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:04.530 16:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:33:05.462 16:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:05.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:05.462 16:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:05.462 16:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.462 16:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:05.462 16:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.462 16:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:05.462 16:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:05.462 16:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:05.720 16:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:06.652 00:33:06.652 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:06.652 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:06.652 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.909 16:46:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:06.909 { 00:33:06.909 "cntlid": 95, 00:33:06.909 "qid": 0, 00:33:06.909 "state": "enabled", 00:33:06.909 "listen_address": { 00:33:06.909 "trtype": "TCP", 00:33:06.909 "adrfam": "IPv4", 00:33:06.909 "traddr": "10.0.0.2", 00:33:06.909 "trsvcid": "4420" 00:33:06.909 }, 00:33:06.909 "peer_address": { 00:33:06.909 "trtype": "TCP", 00:33:06.909 "adrfam": "IPv4", 00:33:06.909 "traddr": "10.0.0.1", 00:33:06.909 "trsvcid": "37504" 00:33:06.909 }, 00:33:06.909 "auth": { 00:33:06.909 "state": "completed", 00:33:06.909 "digest": "sha384", 00:33:06.909 "dhgroup": "ffdhe8192" 00:33:06.909 } 00:33:06.909 } 00:33:06.909 ]' 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:06.909 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:07.166 16:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:33:08.098 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:08.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:08.098 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:08.098 16:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.098 16:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:08.098 16:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.098 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:33:08.098 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:08.098 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:08.098 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:08.098 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:08.356 16:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:08.613 00:33:08.613 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:08.613 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:08.613 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:08.871 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.871 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:08.871 16:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.871 16:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:08.871 16:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.871 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:08.871 { 00:33:08.871 "cntlid": 97, 00:33:08.871 "qid": 0, 00:33:08.871 "state": "enabled", 00:33:08.871 "listen_address": { 00:33:08.871 "trtype": "TCP", 00:33:08.871 "adrfam": "IPv4", 00:33:08.871 "traddr": "10.0.0.2", 00:33:08.871 "trsvcid": "4420" 00:33:08.871 }, 00:33:08.871 "peer_address": { 00:33:08.871 "trtype": "TCP", 00:33:08.871 "adrfam": "IPv4", 00:33:08.871 "traddr": "10.0.0.1", 00:33:08.871 "trsvcid": "56088" 00:33:08.871 }, 00:33:08.871 "auth": { 00:33:08.871 "state": "completed", 00:33:08.871 "digest": "sha512", 00:33:08.871 "dhgroup": "null" 00:33:08.871 } 00:33:08.871 } 00:33:08.871 ]' 00:33:08.871 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:08.871 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:08.871 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:33:08.871 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:33:09.129 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:09.129 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:09.129 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:09.129 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:09.386 16:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:33:10.323 16:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:10.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:10.323 16:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:10.323 16:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.323 16:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:10.323 16:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.323 16:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:10.323 16:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:10.323 16:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:10.580 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:33:10.580 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:10.580 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:10.580 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:10.580 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:10.580 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:10.581 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:10.581 16:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.581 16:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:10.581 16:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.581 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:10.581 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:10.838 00:33:10.838 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:10.838 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:10.838 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:11.096 { 00:33:11.096 "cntlid": 99, 00:33:11.096 "qid": 0, 00:33:11.096 "state": "enabled", 00:33:11.096 "listen_address": { 00:33:11.096 "trtype": "TCP", 00:33:11.096 "adrfam": "IPv4", 00:33:11.096 "traddr": "10.0.0.2", 00:33:11.096 "trsvcid": "4420" 00:33:11.096 }, 00:33:11.096 "peer_address": { 00:33:11.096 "trtype": "TCP", 00:33:11.096 "adrfam": "IPv4", 00:33:11.096 "traddr": "10.0.0.1", 00:33:11.096 "trsvcid": "56118" 00:33:11.096 }, 00:33:11.096 "auth": { 00:33:11.096 "state": "completed", 00:33:11.096 "digest": "sha512", 00:33:11.096 "dhgroup": "null" 00:33:11.096 } 00:33:11.096 } 00:33:11.096 ]' 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:11.096 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:11.354 16:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 
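
The records above trace one full pass of target/auth.sh's connect_authenticate loop; the same sequence repeats below for each remaining digest/dhgroup/key combination. A minimal sketch of a single pass, assuming an SPDK target already listening on 10.0.0.2:4420 and DH-HMAC-CHAP keys named key1/ckey1 already registered on both target and host earlier in the run (the DHHC-1 placeholder secrets below are illustrative stand-ins, not this run's values):

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration, reconstructed from the log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd

# 1. Restrict the host-side bdev_nvme layer to one digest/dhgroup pair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# 2. Allow the host on the target, bound to a DH-HMAC-CHAP key and an
#    optional bidirectional controller key.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Attach from the SPDK host stack, then verify the qpair negotiated the
#    expected digest/dhgroup and reached the "completed" auth state.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -e \
    '.[0].auth | .digest == "sha384" and .dhgroup == "ffdhe6144" and .state == "completed"'
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# 4. Repeat the handshake through the kernel initiator; the secrets are
#    placeholders for real DHHC-1 key material.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 8b464f06-2980-e311-ba20-001e67a94acd \
    --dhchap-secret "DHHC-1:00:<base64-key>:" \
    --dhchap-ctrl-secret "DHHC-1:03:<base64-ctrl-key>:"
nvme disconnect -n "$subnqn"

# 5. Tear down so the next digest/dhgroup/key combination starts clean.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
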
00:33:12.288 16:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:12.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:12.288 16:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:12.288 16:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.288 16:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.288 16:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.288 16:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:12.288 16:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:12.288 16:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.854 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:13.112 00:33:13.112 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:13.112 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:13.112 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:13.370 { 00:33:13.370 "cntlid": 101, 00:33:13.370 "qid": 0, 00:33:13.370 "state": "enabled", 00:33:13.370 "listen_address": { 00:33:13.370 "trtype": "TCP", 00:33:13.370 "adrfam": "IPv4", 00:33:13.370 "traddr": "10.0.0.2", 00:33:13.370 "trsvcid": "4420" 00:33:13.370 }, 00:33:13.370 "peer_address": { 00:33:13.370 "trtype": "TCP", 00:33:13.370 "adrfam": "IPv4", 00:33:13.370 "traddr": "10.0.0.1", 00:33:13.370 "trsvcid": "56152" 00:33:13.370 }, 00:33:13.370 "auth": { 00:33:13.370 "state": "completed", 00:33:13.370 "digest": "sha512", 00:33:13.370 "dhgroup": "null" 00:33:13.370 } 00:33:13.370 } 00:33:13.370 ]' 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:13.370 16:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:13.628 16:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:33:14.562 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:14.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:14.562 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:14.562 16:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.562 16:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.562 16:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.562 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:14.562 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:14.562 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:14.820 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:15.386 00:33:15.386 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:15.386 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:15.386 16:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:15.386 16:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.386 16:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:15.387 16:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.387 16:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:15.387 16:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.387 16:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:15.387 { 00:33:15.387 "cntlid": 103, 00:33:15.387 "qid": 0, 00:33:15.387 "state": "enabled", 00:33:15.387 "listen_address": { 00:33:15.387 "trtype": "TCP", 00:33:15.387 "adrfam": "IPv4", 00:33:15.387 "traddr": "10.0.0.2", 00:33:15.387 "trsvcid": "4420" 00:33:15.387 }, 00:33:15.387 "peer_address": { 00:33:15.387 "trtype": "TCP", 00:33:15.387 "adrfam": "IPv4", 00:33:15.387 "traddr": "10.0.0.1", 00:33:15.387 "trsvcid": "56170" 00:33:15.387 }, 00:33:15.387 "auth": { 00:33:15.387 "state": "completed", 00:33:15.387 "digest": "sha512", 00:33:15.387 "dhgroup": "null" 00:33:15.387 } 00:33:15.387 } 00:33:15.387 ]' 00:33:15.387 16:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:15.645 16:46:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:15.645 16:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:15.645 16:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:33:15.645 16:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:15.645 16:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:15.645 16:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:15.645 16:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:15.903 16:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:33:16.837 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:16.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:16.837 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:16.837 16:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.837 16:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:16.837 16:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.837 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:16.837 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:16.837 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:16.837 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:17.095 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:33:17.095 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:17.095 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:17.095 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:17.095 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:17.095 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:17.095 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.095 16:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.095 16:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:17.095 16:46:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.095 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.095 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.354 00:33:17.354 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:17.354 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:17.354 16:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:17.612 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.612 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:17.612 16:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.612 16:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:17.612 16:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.612 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:17.612 { 00:33:17.612 "cntlid": 105, 00:33:17.612 "qid": 0, 00:33:17.612 "state": "enabled", 00:33:17.612 "listen_address": { 00:33:17.612 "trtype": "TCP", 00:33:17.612 "adrfam": "IPv4", 00:33:17.612 "traddr": "10.0.0.2", 00:33:17.612 "trsvcid": "4420" 00:33:17.612 }, 00:33:17.612 "peer_address": { 00:33:17.612 "trtype": "TCP", 00:33:17.612 "adrfam": "IPv4", 00:33:17.612 "traddr": "10.0.0.1", 00:33:17.612 "trsvcid": "44554" 00:33:17.612 }, 00:33:17.612 "auth": { 00:33:17.612 "state": "completed", 00:33:17.612 "digest": "sha512", 00:33:17.612 "dhgroup": "ffdhe2048" 00:33:17.612 } 00:33:17.612 } 00:33:17.612 ]' 00:33:17.612 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:17.870 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:17.870 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:17.870 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:17.870 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:17.870 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:17.870 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:17.870 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:18.128 16:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 
8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:33:19.062 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:19.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:19.062 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:19.062 16:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.062 16:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:19.062 16:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.062 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:19.062 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.062 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:19.320 16:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:19.579 00:33:19.579 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:19.579 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:19.579 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:19.837 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.837 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:19.837 16:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.837 16:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:19.837 16:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.837 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:19.837 { 00:33:19.837 "cntlid": 107, 00:33:19.837 "qid": 0, 00:33:19.837 "state": "enabled", 00:33:19.837 "listen_address": { 00:33:19.837 "trtype": "TCP", 00:33:19.837 "adrfam": "IPv4", 00:33:19.837 "traddr": "10.0.0.2", 00:33:19.837 "trsvcid": "4420" 00:33:19.837 }, 00:33:19.837 "peer_address": { 00:33:19.837 "trtype": "TCP", 00:33:19.837 "adrfam": "IPv4", 00:33:19.837 "traddr": "10.0.0.1", 00:33:19.837 "trsvcid": "44590" 00:33:19.837 }, 00:33:19.837 "auth": { 00:33:19.837 "state": "completed", 00:33:19.837 "digest": "sha512", 00:33:19.837 "dhgroup": "ffdhe2048" 00:33:19.837 } 00:33:19.837 } 00:33:19.837 ]' 00:33:19.837 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:19.837 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:19.837 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:20.095 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:20.095 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:20.095 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:20.095 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:20.095 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:20.353 16:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:33:21.288 16:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:21.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:21.288 16:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:21.288 16:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.288 16:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:21.288 16:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.288 16:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:21.288 16:46:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:21.288 16:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:21.547 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:21.805 00:33:21.805 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:21.805 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:21.805 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:22.063 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.063 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:22.063 16:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.063 16:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:22.063 16:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.063 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:22.063 { 00:33:22.063 "cntlid": 109, 00:33:22.063 "qid": 0, 00:33:22.063 "state": "enabled", 00:33:22.063 "listen_address": { 00:33:22.063 "trtype": "TCP", 00:33:22.063 "adrfam": "IPv4", 00:33:22.063 "traddr": "10.0.0.2", 00:33:22.063 "trsvcid": "4420" 00:33:22.063 }, 00:33:22.063 "peer_address": { 00:33:22.063 "trtype": "TCP", 00:33:22.063 
"adrfam": "IPv4", 00:33:22.063 "traddr": "10.0.0.1", 00:33:22.063 "trsvcid": "44618" 00:33:22.063 }, 00:33:22.063 "auth": { 00:33:22.063 "state": "completed", 00:33:22.063 "digest": "sha512", 00:33:22.063 "dhgroup": "ffdhe2048" 00:33:22.063 } 00:33:22.063 } 00:33:22.063 ]' 00:33:22.063 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:22.063 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:22.063 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:22.321 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:22.321 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:22.321 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:22.321 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:22.321 16:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:22.580 16:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:33:23.514 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:23.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:23.514 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:23.514 16:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.514 16:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:23.514 16:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.514 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:23.514 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:23.514 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:23.771 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:33:23.771 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:23.771 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:23.772 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:23.772 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:23.772 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:23.772 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:33:23.772 16:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.772 16:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:23.772 16:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.772 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:23.772 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:24.338 00:33:24.338 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:24.338 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:24.338 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:24.338 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.596 16:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:24.596 16:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.596 16:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:24.596 16:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.596 16:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:24.596 { 00:33:24.596 "cntlid": 111, 00:33:24.596 "qid": 0, 00:33:24.596 "state": "enabled", 00:33:24.596 "listen_address": { 00:33:24.596 "trtype": "TCP", 00:33:24.596 "adrfam": "IPv4", 00:33:24.596 "traddr": "10.0.0.2", 00:33:24.596 "trsvcid": "4420" 00:33:24.596 }, 00:33:24.596 "peer_address": { 00:33:24.596 "trtype": "TCP", 00:33:24.596 "adrfam": "IPv4", 00:33:24.596 "traddr": "10.0.0.1", 00:33:24.596 "trsvcid": "44648" 00:33:24.596 }, 00:33:24.596 "auth": { 00:33:24.596 "state": "completed", 00:33:24.596 "digest": "sha512", 00:33:24.596 "dhgroup": "ffdhe2048" 00:33:24.596 } 00:33:24.596 } 00:33:24.596 ]' 00:33:24.596 16:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:24.596 16:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:24.596 16:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:24.596 16:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:24.596 16:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:24.596 16:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:24.596 16:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:24.596 16:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:24.854 16:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:33:25.787 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:25.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:25.787 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:25.787 16:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.787 16:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.046 16:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:26.304 16:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.304 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:26.304 16:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
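The trace above repeats the same RPC sequence for every digest/dhgroup/key combination. A minimal stand-alone sketch of one pass, using the addresses and socket paths from this trace; key0/ckey0 are assumed to be key names loaded into the target's keyring earlier in the script (e.g., via keyring_file_add_key), not inline secrets:

    # host side: restrict DH-HMAC-CHAP negotiation to the combination under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # target side: allow the host NQN on the subsystem with the keys under test
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side: attach the controller; DH-HMAC-CHAP runs during the fabric connect
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0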
00:33:26.562 00:33:26.562 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:26.562 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:26.562 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:26.821 { 00:33:26.821 "cntlid": 113, 00:33:26.821 "qid": 0, 00:33:26.821 "state": "enabled", 00:33:26.821 "listen_address": { 00:33:26.821 "trtype": "TCP", 00:33:26.821 "adrfam": "IPv4", 00:33:26.821 "traddr": "10.0.0.2", 00:33:26.821 "trsvcid": "4420" 00:33:26.821 }, 00:33:26.821 "peer_address": { 00:33:26.821 "trtype": "TCP", 00:33:26.821 "adrfam": "IPv4", 00:33:26.821 "traddr": "10.0.0.1", 00:33:26.821 "trsvcid": "44664" 00:33:26.821 }, 00:33:26.821 "auth": { 00:33:26.821 "state": "completed", 00:33:26.821 "digest": "sha512", 00:33:26.821 "dhgroup": "ffdhe3072" 00:33:26.821 } 00:33:26.821 } 00:33:26.821 ]' 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:26.821 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:27.078 16:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:33:28.012 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:28.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:28.012 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:28.012 16:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
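After each attach, the test reads the negotiated parameters back from the target and asserts them. The checks reduce to the following sketch (jq filters exactly as in the trace; the expected digest and dhgroup values vary per iteration):

    # query the target for the qpair the host just established
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # assert the negotiated digest, DH group, and final authentication state
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The same key material is then exercised over the kernel initiator path with nvme connect --dhchap-secret/--dhchap-ctrl-secret and nvme disconnect, after which the host is removed from the subsystem and the next digest/dhgroup combination is configured.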
00:33:28.012 16:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:28.012 16:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.012 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:28.012 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:28.012 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:28.579 16:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:28.837 00:33:28.837 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:28.837 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:28.838 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:29.096 { 00:33:29.096 
"cntlid": 115, 00:33:29.096 "qid": 0, 00:33:29.096 "state": "enabled", 00:33:29.096 "listen_address": { 00:33:29.096 "trtype": "TCP", 00:33:29.096 "adrfam": "IPv4", 00:33:29.096 "traddr": "10.0.0.2", 00:33:29.096 "trsvcid": "4420" 00:33:29.096 }, 00:33:29.096 "peer_address": { 00:33:29.096 "trtype": "TCP", 00:33:29.096 "adrfam": "IPv4", 00:33:29.096 "traddr": "10.0.0.1", 00:33:29.096 "trsvcid": "39452" 00:33:29.096 }, 00:33:29.096 "auth": { 00:33:29.096 "state": "completed", 00:33:29.096 "digest": "sha512", 00:33:29.096 "dhgroup": "ffdhe3072" 00:33:29.096 } 00:33:29.096 } 00:33:29.096 ]' 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:29.096 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:29.354 16:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:33:30.288 16:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:30.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:30.288 16:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:30.288 16:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.288 16:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:30.288 16:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.288 16:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:30.288 16:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:30.288 16:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:30.546 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:31.112 00:33:31.112 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:31.112 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:31.112 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:31.370 { 00:33:31.370 "cntlid": 117, 00:33:31.370 "qid": 0, 00:33:31.370 "state": "enabled", 00:33:31.370 "listen_address": { 00:33:31.370 "trtype": "TCP", 00:33:31.370 "adrfam": "IPv4", 00:33:31.370 "traddr": "10.0.0.2", 00:33:31.370 "trsvcid": "4420" 00:33:31.370 }, 00:33:31.370 "peer_address": { 00:33:31.370 "trtype": "TCP", 00:33:31.370 "adrfam": "IPv4", 00:33:31.370 "traddr": "10.0.0.1", 00:33:31.370 "trsvcid": "39472" 00:33:31.370 }, 00:33:31.370 "auth": { 00:33:31.370 "state": "completed", 00:33:31.370 "digest": "sha512", 00:33:31.370 "dhgroup": "ffdhe3072" 00:33:31.370 } 00:33:31.370 } 00:33:31.370 ]' 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:31.370 16:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:31.629 16:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:33:32.562 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:32.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:32.562 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:32.562 16:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.562 16:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:32.562 16:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.562 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:32.562 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:32.562 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:32.820 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:33.078 00:33:33.078 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:33.078 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:33.078 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:33.336 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.336 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:33.336 16:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.336 16:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:33.336 16:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.336 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:33.336 { 00:33:33.336 "cntlid": 119, 00:33:33.336 "qid": 0, 00:33:33.336 "state": "enabled", 00:33:33.336 "listen_address": { 00:33:33.336 "trtype": "TCP", 00:33:33.336 "adrfam": "IPv4", 00:33:33.336 "traddr": "10.0.0.2", 00:33:33.336 "trsvcid": "4420" 00:33:33.336 }, 00:33:33.336 "peer_address": { 00:33:33.336 "trtype": "TCP", 00:33:33.336 "adrfam": "IPv4", 00:33:33.336 "traddr": "10.0.0.1", 00:33:33.336 "trsvcid": "39504" 00:33:33.336 }, 00:33:33.336 "auth": { 00:33:33.336 "state": "completed", 00:33:33.336 "digest": "sha512", 00:33:33.336 "dhgroup": "ffdhe3072" 00:33:33.336 } 00:33:33.336 } 00:33:33.336 ]' 00:33:33.336 16:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:33.594 16:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:33.594 16:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:33.594 16:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:33.594 16:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:33.594 16:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:33.594 16:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:33.594 16:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:33.852 16:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:33:34.786 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:34.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:34.786 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:34.786 16:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.786 16:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:34.786 16:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.786 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:34.786 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:34.786 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:34.786 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:35.044 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:35.302 00:33:35.302 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:35.302 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:35.302 16:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:35.560 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.560 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:35.560 16:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.560 16:46:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.560 16:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.560 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:35.560 { 00:33:35.560 "cntlid": 121, 00:33:35.560 "qid": 0, 00:33:35.560 "state": "enabled", 00:33:35.560 "listen_address": { 00:33:35.560 "trtype": "TCP", 00:33:35.560 "adrfam": "IPv4", 00:33:35.560 "traddr": "10.0.0.2", 00:33:35.560 "trsvcid": "4420" 00:33:35.560 }, 00:33:35.560 "peer_address": { 00:33:35.560 "trtype": "TCP", 00:33:35.560 "adrfam": "IPv4", 00:33:35.560 "traddr": "10.0.0.1", 00:33:35.560 "trsvcid": "39536" 00:33:35.560 }, 00:33:35.560 "auth": { 00:33:35.560 "state": "completed", 00:33:35.560 "digest": "sha512", 00:33:35.560 "dhgroup": "ffdhe4096" 00:33:35.560 } 00:33:35.560 } 00:33:35.560 ]' 00:33:35.560 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:35.818 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:35.818 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:35.818 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:35.818 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:35.818 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:35.818 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:35.818 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:36.076 16:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:33:37.010 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:37.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:37.010 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:37.010 16:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.010 16:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:37.010 16:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.010 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:37.010 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:37.010 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:37.268 16:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:37.833 00:33:37.833 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:37.833 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:37.833 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:38.090 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.090 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:38.090 16:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.090 16:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:38.090 16:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.090 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:38.090 { 00:33:38.090 "cntlid": 123, 00:33:38.090 "qid": 0, 00:33:38.090 "state": "enabled", 00:33:38.090 "listen_address": { 00:33:38.090 "trtype": "TCP", 00:33:38.090 "adrfam": "IPv4", 00:33:38.090 "traddr": "10.0.0.2", 00:33:38.090 "trsvcid": "4420" 00:33:38.090 }, 00:33:38.090 "peer_address": { 00:33:38.090 "trtype": "TCP", 00:33:38.090 "adrfam": "IPv4", 00:33:38.090 "traddr": "10.0.0.1", 00:33:38.090 "trsvcid": "52902" 00:33:38.090 }, 00:33:38.090 "auth": { 00:33:38.090 "state": "completed", 00:33:38.090 "digest": "sha512", 00:33:38.090 "dhgroup": "ffdhe4096" 00:33:38.090 } 00:33:38.090 } 00:33:38.090 ]' 00:33:38.090 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:38.090 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:33:38.091 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:38.091 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:38.091 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:38.091 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:38.091 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:38.091 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:38.347 16:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:33:39.328 16:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:39.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:39.328 16:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:39.328 16:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.328 16:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.328 16:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.328 16:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:39.328 16:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:39.328 16:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:39.593 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:33:39.593 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:39.593 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:39.593 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:39.593 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:39.593 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:39.593 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:39.593 16:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.593 16:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 16:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.593 
16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:39.593 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:39.851 00:33:39.851 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:39.851 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:39.851 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:40.109 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.109 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:40.109 16:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.109 16:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:40.109 16:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.109 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:40.109 { 00:33:40.109 "cntlid": 125, 00:33:40.109 "qid": 0, 00:33:40.109 "state": "enabled", 00:33:40.109 "listen_address": { 00:33:40.109 "trtype": "TCP", 00:33:40.109 "adrfam": "IPv4", 00:33:40.109 "traddr": "10.0.0.2", 00:33:40.109 "trsvcid": "4420" 00:33:40.109 }, 00:33:40.109 "peer_address": { 00:33:40.109 "trtype": "TCP", 00:33:40.109 "adrfam": "IPv4", 00:33:40.109 "traddr": "10.0.0.1", 00:33:40.109 "trsvcid": "52930" 00:33:40.109 }, 00:33:40.109 "auth": { 00:33:40.109 "state": "completed", 00:33:40.109 "digest": "sha512", 00:33:40.109 "dhgroup": "ffdhe4096" 00:33:40.109 } 00:33:40.109 } 00:33:40.109 ]' 00:33:40.109 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:40.367 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:40.367 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:40.367 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:40.367 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:40.367 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:40.367 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:40.367 16:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:40.625 16:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret 
DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:33:41.558 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:41.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:41.558 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:41.558 16:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.558 16:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:41.558 16:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.558 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:41.558 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:41.558 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:41.814 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:33:41.814 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:41.814 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:41.815 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:41.815 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:41.815 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:41.815 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:33:41.815 16:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.815 16:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:41.815 16:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.815 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:41.815 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:42.071 00:33:42.071 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:42.071 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:42.071 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:42.328 16:47:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.328 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:42.328 16:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.328 16:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:42.585 16:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.585 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:42.585 { 00:33:42.585 "cntlid": 127, 00:33:42.585 "qid": 0, 00:33:42.585 "state": "enabled", 00:33:42.585 "listen_address": { 00:33:42.585 "trtype": "TCP", 00:33:42.585 "adrfam": "IPv4", 00:33:42.585 "traddr": "10.0.0.2", 00:33:42.585 "trsvcid": "4420" 00:33:42.585 }, 00:33:42.585 "peer_address": { 00:33:42.585 "trtype": "TCP", 00:33:42.585 "adrfam": "IPv4", 00:33:42.585 "traddr": "10.0.0.1", 00:33:42.585 "trsvcid": "52964" 00:33:42.585 }, 00:33:42.585 "auth": { 00:33:42.585 "state": "completed", 00:33:42.585 "digest": "sha512", 00:33:42.585 "dhgroup": "ffdhe4096" 00:33:42.585 } 00:33:42.585 } 00:33:42.585 ]' 00:33:42.585 16:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:42.585 16:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:42.585 16:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:42.585 16:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:42.585 16:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:42.585 16:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:42.585 16:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:42.585 16:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:42.842 16:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:33:43.774 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:43.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:43.774 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:43.774 16:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.774 16:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:43.774 16:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.774 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:43.774 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:43.774 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
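
The trace above repeats one fixed cycle per (digest, dhgroup, key) combination: pin the host's DH-HMAC-CHAP parameters with bdev_nvme_set_options, authorize the host NQN on the subsystem with nvmf_subsystem_add_host, attach a controller so the handshake runs, confirm via nvmf_subsystem_get_qpairs that the qpair finished authentication with the expected digest and dhgroup, then detach and deauthorize. A minimal standalone sketch of one iteration, assuming an SPDK target already listening on 10.0.0.2:4420, a host RPC server on /var/tmp/host.sock, and keyring entries named key1/ckey1 created earlier in the script (outside this excerpt); the rpc.py path and NQNs below are placeholders taken from this run:

    #!/usr/bin/env bash
    set -e
    rpc=./scripts/rpc.py                  # placeholder path to SPDK's rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd

    # Restrict the host to a single digest/dhgroup so the handshake must use it.
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Authorize the host on the target; the ctrlr key enables bidirectional auth.
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Attaching a controller triggers the DH-HMAC-CHAP exchange on the new qpair.
    $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the negotiated parameters from the target's point of view.
    qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]

    # Tear down before the next combination.
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn
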
00:33:43.774 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:44.031 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:33:44.031 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:44.032 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:44.032 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:44.032 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:44.032 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:44.032 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:44.032 16:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.032 16:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.032 16:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.032 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:44.032 16:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:44.597 00:33:44.597 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:44.597 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:44.597 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:44.856 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.856 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:44.856 16:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.856 16:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.856 16:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.856 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:44.856 { 00:33:44.856 "cntlid": 129, 00:33:44.856 "qid": 0, 00:33:44.856 "state": "enabled", 00:33:44.856 "listen_address": { 00:33:44.856 "trtype": "TCP", 00:33:44.856 "adrfam": "IPv4", 00:33:44.856 "traddr": "10.0.0.2", 00:33:44.856 "trsvcid": "4420" 00:33:44.856 }, 00:33:44.856 "peer_address": { 00:33:44.856 "trtype": "TCP", 00:33:44.856 "adrfam": "IPv4", 00:33:44.856 "traddr": "10.0.0.1", 00:33:44.856 "trsvcid": "52994" 00:33:44.856 }, 00:33:44.856 "auth": { 
00:33:44.856 "state": "completed", 00:33:44.856 "digest": "sha512", 00:33:44.856 "dhgroup": "ffdhe6144" 00:33:44.856 } 00:33:44.856 } 00:33:44.856 ]' 00:33:44.856 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:44.856 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:44.856 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:44.856 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:44.856 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:45.114 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:45.114 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:45.114 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:45.371 16:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:33:46.305 16:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:46.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:46.306 16:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:46.306 16:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.306 16:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.306 16:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.306 16:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:46.306 16:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:46.306 16:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:46.564 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:47.129 00:33:47.129 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:47.129 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:47.129 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:47.387 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.387 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:47.387 16:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.387 16:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.387 16:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.387 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:47.387 { 00:33:47.387 "cntlid": 131, 00:33:47.387 "qid": 0, 00:33:47.387 "state": "enabled", 00:33:47.387 "listen_address": { 00:33:47.387 "trtype": "TCP", 00:33:47.387 "adrfam": "IPv4", 00:33:47.387 "traddr": "10.0.0.2", 00:33:47.387 "trsvcid": "4420" 00:33:47.387 }, 00:33:47.387 "peer_address": { 00:33:47.387 "trtype": "TCP", 00:33:47.387 "adrfam": "IPv4", 00:33:47.387 "traddr": "10.0.0.1", 00:33:47.387 "trsvcid": "53032" 00:33:47.387 }, 00:33:47.387 "auth": { 00:33:47.387 "state": "completed", 00:33:47.387 "digest": "sha512", 00:33:47.387 "dhgroup": "ffdhe6144" 00:33:47.387 } 00:33:47.387 } 00:33:47.387 ]' 00:33:47.387 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:47.387 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:47.387 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:47.387 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:47.387 16:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:47.645 16:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:47.645 16:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:47.645 16:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:47.645 16:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:49.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:49.018 16:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:33:49.584 00:33:49.584 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:49.584 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:49.584 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:49.841 { 00:33:49.841 "cntlid": 133, 00:33:49.841 "qid": 0, 00:33:49.841 "state": "enabled", 00:33:49.841 "listen_address": { 00:33:49.841 "trtype": "TCP", 00:33:49.841 "adrfam": "IPv4", 00:33:49.841 "traddr": "10.0.0.2", 00:33:49.841 "trsvcid": "4420" 00:33:49.841 }, 00:33:49.841 "peer_address": { 00:33:49.841 "trtype": "TCP", 00:33:49.841 "adrfam": "IPv4", 00:33:49.841 "traddr": "10.0.0.1", 00:33:49.841 "trsvcid": "52124" 00:33:49.841 }, 00:33:49.841 "auth": { 00:33:49.841 "state": "completed", 00:33:49.841 "digest": "sha512", 00:33:49.841 "dhgroup": "ffdhe6144" 00:33:49.841 } 00:33:49.841 } 00:33:49.841 ]' 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:49.841 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:50.099 16:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:33:51.473 16:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:51.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:51.473 16:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:51.473 16:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.473 16:47:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:51.473 16:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.473 16:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:51.473 16:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:51.473 16:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:51.473 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:52.040 00:33:52.040 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:52.040 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:52.040 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:52.298 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.298 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:52.298 16:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.298 16:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:52.298 16:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.298 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:52.298 { 00:33:52.298 "cntlid": 135, 00:33:52.298 "qid": 0, 00:33:52.298 "state": "enabled", 00:33:52.298 "listen_address": { 
00:33:52.298 "trtype": "TCP", 00:33:52.298 "adrfam": "IPv4", 00:33:52.298 "traddr": "10.0.0.2", 00:33:52.298 "trsvcid": "4420" 00:33:52.298 }, 00:33:52.298 "peer_address": { 00:33:52.298 "trtype": "TCP", 00:33:52.298 "adrfam": "IPv4", 00:33:52.298 "traddr": "10.0.0.1", 00:33:52.298 "trsvcid": "52148" 00:33:52.298 }, 00:33:52.298 "auth": { 00:33:52.298 "state": "completed", 00:33:52.298 "digest": "sha512", 00:33:52.298 "dhgroup": "ffdhe6144" 00:33:52.298 } 00:33:52.298 } 00:33:52.298 ]' 00:33:52.298 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:52.298 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:52.298 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:52.298 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:52.298 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:52.556 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:52.556 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:52.556 16:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:52.814 16:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:33:53.747 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:53.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:53.747 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:53.747 16:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.747 16:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.747 16:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.747 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:53.747 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:53.747 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:53.747 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:54.006 16:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:54.939 00:33:54.939 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:54.939 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:54.939 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:55.197 { 00:33:55.197 "cntlid": 137, 00:33:55.197 "qid": 0, 00:33:55.197 "state": "enabled", 00:33:55.197 "listen_address": { 00:33:55.197 "trtype": "TCP", 00:33:55.197 "adrfam": "IPv4", 00:33:55.197 "traddr": "10.0.0.2", 00:33:55.197 "trsvcid": "4420" 00:33:55.197 }, 00:33:55.197 "peer_address": { 00:33:55.197 "trtype": "TCP", 00:33:55.197 "adrfam": "IPv4", 00:33:55.197 "traddr": "10.0.0.1", 00:33:55.197 "trsvcid": "52186" 00:33:55.197 }, 00:33:55.197 "auth": { 00:33:55.197 "state": "completed", 00:33:55.197 "digest": "sha512", 00:33:55.197 "dhgroup": "ffdhe8192" 00:33:55.197 } 00:33:55.197 } 00:33:55.197 ]' 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:55.197 16:47:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:55.197 16:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:55.455 16:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:33:56.388 16:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:56.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:56.388 16:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:56.388 16:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.388 16:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:56.388 16:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.388 16:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:56.388 16:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:56.388 16:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:56.646 16:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:33:56.646 16:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:56.646 16:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:56.646 16:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:56.646 16:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:56.646 16:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:56.646 16:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:56.646 16:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.646 16:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:56.646 16:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.646 16:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:56.646 16:47:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.580 00:33:57.580 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:57.580 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:57.580 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:57.838 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.838 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:57.838 16:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.838 16:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:57.838 16:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.838 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:57.838 { 00:33:57.838 "cntlid": 139, 00:33:57.838 "qid": 0, 00:33:57.838 "state": "enabled", 00:33:57.838 "listen_address": { 00:33:57.838 "trtype": "TCP", 00:33:57.838 "adrfam": "IPv4", 00:33:57.839 "traddr": "10.0.0.2", 00:33:57.839 "trsvcid": "4420" 00:33:57.839 }, 00:33:57.839 "peer_address": { 00:33:57.839 "trtype": "TCP", 00:33:57.839 "adrfam": "IPv4", 00:33:57.839 "traddr": "10.0.0.1", 00:33:57.839 "trsvcid": "41288" 00:33:57.839 }, 00:33:57.839 "auth": { 00:33:57.839 "state": "completed", 00:33:57.839 "digest": "sha512", 00:33:57.839 "dhgroup": "ffdhe8192" 00:33:57.839 } 00:33:57.839 } 00:33:57.839 ]' 00:33:57.839 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:57.839 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:57.839 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:57.839 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:57.839 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:57.839 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:57.839 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:57.839 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:58.096 16:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:01:ZjdiYjI1ZGJiMTc2MTUzYmMyOTNlZjQyZWVmZjY4Njdc2L24: --dhchap-ctrl-secret DHHC-1:02:NjQ3NDAxOWNkMGQxOGMwMTFhZGQ2ZWZjMzUwYzQyMjZlYTA5MWE3NGNiZDQ1MDc3xNU4FQ==: 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:59.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.470 16:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:00.404 00:34:00.404 16:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:00.404 16:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:00.404 16:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:00.663 { 00:34:00.663 "cntlid": 141, 00:34:00.663 "qid": 0, 00:34:00.663 "state": "enabled", 00:34:00.663 "listen_address": { 00:34:00.663 "trtype": "TCP", 00:34:00.663 "adrfam": "IPv4", 00:34:00.663 "traddr": "10.0.0.2", 00:34:00.663 "trsvcid": "4420" 00:34:00.663 }, 00:34:00.663 "peer_address": { 00:34:00.663 "trtype": "TCP", 00:34:00.663 "adrfam": "IPv4", 00:34:00.663 "traddr": "10.0.0.1", 00:34:00.663 "trsvcid": "41322" 00:34:00.663 }, 00:34:00.663 "auth": { 00:34:00.663 "state": "completed", 00:34:00.663 "digest": "sha512", 00:34:00.663 "dhgroup": "ffdhe8192" 00:34:00.663 } 00:34:00.663 } 00:34:00.663 ]' 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:00.663 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:00.921 16:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:02:MzQ3ZGI0ZDQzYzUyNzVkNTIyOGY2NjhjNmVjYTFlZDM3ZDI3NmQ2ZTM0ZTVlMzNjlc1mYw==: --dhchap-ctrl-secret DHHC-1:01:NjgzZGUyN2NjYzJkNzhmOTU4YWZmMTBhNmE5NjM5MzMvJ7up: 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:02.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:02.295 16:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:03.227 00:34:03.227 16:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:03.227 16:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:03.227 16:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:03.484 16:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.484 16:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:03.484 16:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.484 16:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:03.484 16:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.484 16:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:03.484 { 00:34:03.484 "cntlid": 143, 00:34:03.484 "qid": 0, 00:34:03.484 "state": "enabled", 00:34:03.484 "listen_address": { 00:34:03.484 "trtype": "TCP", 00:34:03.484 "adrfam": "IPv4", 00:34:03.484 "traddr": "10.0.0.2", 00:34:03.484 "trsvcid": "4420" 00:34:03.484 }, 00:34:03.484 "peer_address": { 00:34:03.484 "trtype": "TCP", 00:34:03.484 "adrfam": "IPv4", 00:34:03.484 "traddr": "10.0.0.1", 00:34:03.485 "trsvcid": "41350" 00:34:03.485 }, 00:34:03.485 "auth": { 00:34:03.485 "state": "completed", 00:34:03.485 "digest": "sha512", 00:34:03.485 "dhgroup": "ffdhe8192" 00:34:03.485 } 00:34:03.485 } 00:34:03.485 ]' 00:34:03.485 16:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:03.485 16:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:03.485 16:47:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:03.485 16:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:03.485 16:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:03.485 16:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:03.485 16:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:03.485 16:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:03.742 16:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:34:04.674 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:04.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:04.674 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:04.674 16:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.674 16:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.674 16:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.674 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:34:04.674 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:34:04.674 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:34:04.674 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:04.674 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:04.674 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 
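[Editor's note] The trace above registers the host for bidirectional DH-HMAC-CHAP: nvmf_subsystem_add_host passes both --dhchap-key key0 (host authenticates to the target) and --dhchap-ctrlr-key ckey0 (target authenticates back). A minimal standalone sketch of the same sequence, assuming an nvmf_tgt listening on 10.0.0.2:4420, a host-side bdev app on /var/tmp/host.sock, and keys named key0/ckey0 that were registered earlier in the run (that registration is not shown in this excerpt):

    RPC=./scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd

    # Target side: admit the host on the subsystem with both key directions.
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: advertise the allowed digests/DH groups, then attach.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

Success is then verified exactly as in the qpair JSON above: nvmf_subsystem_get_qpairs must report auth.state "completed" with the negotiated digest and dhgroup.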
00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.932 16:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:05.864 00:34:05.864 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:05.864 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:05.864 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:06.122 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.122 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:06.122 16:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.122 16:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:06.122 16:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.122 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:06.122 { 00:34:06.122 "cntlid": 145, 00:34:06.122 "qid": 0, 00:34:06.122 "state": "enabled", 00:34:06.122 "listen_address": { 00:34:06.122 "trtype": "TCP", 00:34:06.122 "adrfam": "IPv4", 00:34:06.122 "traddr": "10.0.0.2", 00:34:06.122 "trsvcid": "4420" 00:34:06.122 }, 00:34:06.122 "peer_address": { 00:34:06.122 "trtype": "TCP", 00:34:06.122 "adrfam": "IPv4", 00:34:06.122 "traddr": "10.0.0.1", 00:34:06.122 "trsvcid": "41376" 00:34:06.122 }, 00:34:06.122 "auth": { 00:34:06.122 "state": "completed", 00:34:06.122 "digest": "sha512", 00:34:06.122 "dhgroup": "ffdhe8192" 00:34:06.122 } 00:34:06.122 } 00:34:06.122 ]' 00:34:06.122 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:06.122 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:06.122 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:06.380 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:06.380 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:06.380 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:06.380 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:06.380 16:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:06.638 
16:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:00:MDFmMTg4NmNmMTA3MmUzYjc0MmY5MmI0MmM3NDUzN2UxOGE0YzA5MDI1OWI4MTk2+k6mTw==: --dhchap-ctrl-secret DHHC-1:03:NmIzOTg2Yzc4NTA3ZTUwYmU1Y2JlMjAwNThmNTg5MDY4NjI4MGM2OTNmNDM4MGY5ZGMzOWMxNWZiMzkyZTNiYzpt/R4=: 00:34:07.571 16:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:07.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:07.571 16:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:08.525 request: 00:34:08.525 { 00:34:08.525 "name": "nvme0", 00:34:08.525 "trtype": "tcp", 00:34:08.525 "traddr": 
"10.0.0.2", 00:34:08.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:34:08.525 "adrfam": "ipv4", 00:34:08.525 "trsvcid": "4420", 00:34:08.525 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:08.525 "dhchap_key": "key2", 00:34:08.526 "method": "bdev_nvme_attach_controller", 00:34:08.526 "req_id": 1 00:34:08.526 } 00:34:08.526 Got JSON-RPC error response 00:34:08.526 response: 00:34:08.526 { 00:34:08.526 "code": -5, 00:34:08.526 "message": "Input/output error" 00:34:08.526 } 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:08.526 16:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:09.114 request: 00:34:09.114 { 00:34:09.114 "name": "nvme0", 00:34:09.114 "trtype": "tcp", 00:34:09.114 "traddr": "10.0.0.2", 00:34:09.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:34:09.114 "adrfam": "ipv4", 00:34:09.114 "trsvcid": "4420", 00:34:09.114 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:09.114 "dhchap_key": "key1", 00:34:09.114 "dhchap_ctrlr_key": "ckey2", 00:34:09.114 "method": "bdev_nvme_attach_controller", 00:34:09.114 "req_id": 1 00:34:09.114 } 00:34:09.114 Got JSON-RPC error response 00:34:09.114 response: 00:34:09.114 { 00:34:09.114 "code": -5, 00:34:09.114 "message": "Input/output error" 00:34:09.114 } 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.114 16:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.115 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:34:09.115 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.115 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:34:09.115 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:09.115 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:34:09.115 16:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:09.115 16:47:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.115 16:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.048 request: 00:34:10.048 { 00:34:10.048 "name": "nvme0", 00:34:10.048 "trtype": "tcp", 00:34:10.048 "traddr": "10.0.0.2", 00:34:10.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:34:10.048 "adrfam": "ipv4", 00:34:10.048 "trsvcid": "4420", 00:34:10.048 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:10.048 "dhchap_key": "key1", 00:34:10.048 "dhchap_ctrlr_key": "ckey1", 00:34:10.048 "method": "bdev_nvme_attach_controller", 00:34:10.048 "req_id": 1 00:34:10.048 } 00:34:10.048 Got JSON-RPC error response 00:34:10.048 response: 00:34:10.048 { 00:34:10.048 "code": -5, 00:34:10.048 "message": "Input/output error" 00:34:10.048 } 00:34:10.048 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:34:10.048 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2806108 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 2806108 ']' 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 2806108 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2806108 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2806108' 00:34:10.049 killing process with pid 2806108 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 2806108 00:34:10.049 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 2806108 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:34:10.307 16:47:29 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2828628 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2828628 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2828628 ']' 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:10.307 16:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2828628 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2828628 ']' 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
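[Editor's note] At this point the first target process (pid 2806108) has been killed and nvmfappstart relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc -L nvmf_auth, then blocks in waitforlisten until the RPC socket answers. A rough standalone equivalent of that wait loop (a hypothetical helper, not the autotest implementation):

    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            # rpc_get_methods succeeds once the app is listening on the socket
            ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }

With --wait-for-rpc the app comes up with its subsystems uninitialized, which is why the very next step in the trace is a bare rpc_cmd batch (target/auth.sh@143), presumably issuing framework_start_init and the listener setup, before the sha512/ffdhe8192 re-authentication at @153.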
00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:10.565 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:10.822 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:10.822 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:34:10.822 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:34:10.823 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.823 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:11.079 16:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:12.009 00:34:12.009 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:12.009 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:12.010 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:12.267 { 00:34:12.267 
"cntlid": 1, 00:34:12.267 "qid": 0, 00:34:12.267 "state": "enabled", 00:34:12.267 "listen_address": { 00:34:12.267 "trtype": "TCP", 00:34:12.267 "adrfam": "IPv4", 00:34:12.267 "traddr": "10.0.0.2", 00:34:12.267 "trsvcid": "4420" 00:34:12.267 }, 00:34:12.267 "peer_address": { 00:34:12.267 "trtype": "TCP", 00:34:12.267 "adrfam": "IPv4", 00:34:12.267 "traddr": "10.0.0.1", 00:34:12.267 "trsvcid": "57614" 00:34:12.267 }, 00:34:12.267 "auth": { 00:34:12.267 "state": "completed", 00:34:12.267 "digest": "sha512", 00:34:12.267 "dhgroup": "ffdhe8192" 00:34:12.267 } 00:34:12.267 } 00:34:12.267 ]' 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:12.267 16:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:12.525 16:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd --dhchap-secret DHHC-1:03:OTQ1ZTZkMDliNjY0OWI4OGY2ZDJkOGZkMzIzYjBjOTA4MmRmNTAyNzY2ODVmMGRkMWZlZjY5ODg4MWQ2MmNmZGeCP/c=: 00:34:13.456 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:13.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:13.457 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:13.457 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.457 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:13.457 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.457 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:34:13.457 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.457 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:13.714 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:13.971 request: 00:34:13.971 { 00:34:13.971 "name": "nvme0", 00:34:13.971 "trtype": "tcp", 00:34:13.971 "traddr": "10.0.0.2", 00:34:13.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:34:13.971 "adrfam": "ipv4", 00:34:13.971 "trsvcid": "4420", 00:34:13.971 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:13.971 "dhchap_key": "key3", 00:34:13.971 "method": "bdev_nvme_attach_controller", 00:34:13.971 "req_id": 1 00:34:13.971 } 00:34:13.971 Got JSON-RPC error response 00:34:13.971 response: 00:34:13.971 { 00:34:13.971 "code": -5, 00:34:13.971 "message": "Input/output error" 00:34:13.971 } 00:34:13.971 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:34:13.971 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:13.971 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:13.971 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:13.971 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:34:13.971 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:34:13.971 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:34:13.971 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:34:14.229 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:14.229 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:34:14.229 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:14.229 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:34:14.229 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:14.229 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:34:14.229 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:14.229 16:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:14.229 16:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:14.487 request: 00:34:14.487 { 00:34:14.487 "name": "nvme0", 00:34:14.487 "trtype": "tcp", 00:34:14.487 "traddr": "10.0.0.2", 00:34:14.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:34:14.487 "adrfam": "ipv4", 00:34:14.487 "trsvcid": "4420", 00:34:14.487 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:14.487 "dhchap_key": "key3", 00:34:14.487 "method": "bdev_nvme_attach_controller", 00:34:14.487 "req_id": 1 00:34:14.487 } 00:34:14.487 Got JSON-RPC error response 00:34:14.487 response: 00:34:14.487 { 00:34:14.487 "code": -5, 00:34:14.487 "message": "Input/output error" 00:34:14.487 } 00:34:14.487 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:34:14.487 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:14.487 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:14.487 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:14.487 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:34:14.487 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:34:14.487 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:34:14.487 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:14.487 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:14.487 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:14.745 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:15.002 request: 00:34:15.002 { 00:34:15.002 "name": "nvme0", 00:34:15.002 "trtype": "tcp", 00:34:15.002 "traddr": "10.0.0.2", 00:34:15.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:34:15.002 "adrfam": "ipv4", 00:34:15.002 "trsvcid": "4420", 00:34:15.002 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:15.002 "dhchap_key": "key0", 00:34:15.002 "dhchap_ctrlr_key": "key1", 00:34:15.002 "method": "bdev_nvme_attach_controller", 00:34:15.002 "req_id": 1 00:34:15.002 } 00:34:15.002 Got JSON-RPC error response 00:34:15.002 response: 00:34:15.002 { 00:34:15.002 "code": -5, 00:34:15.002 "message": "Input/output error" 00:34:15.002 } 00:34:15.002 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:34:15.002 16:47:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:15.002 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:15.002 16:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:15.002 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:15.002 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:15.567 00:34:15.567 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:34:15.567 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:34:15.567 16:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:15.567 16:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.567 16:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:15.567 16:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:15.824 16:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:34:15.824 16:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:34:15.824 16:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2806134 00:34:15.824 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 2806134 ']' 00:34:15.824 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 2806134 00:34:15.824 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:34:15.824 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:15.824 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2806134 00:34:16.081 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:16.081 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:16.081 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2806134' 00:34:16.081 killing process with pid 2806134 00:34:16.081 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 2806134 00:34:16.081 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 2806134 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
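[Editor's note] The "NOT hostrpc bdev_nvme_attach_controller ..." invocations above are expected-failure assertions: each deliberately mismatched key pairing (key2, key1+ckey2, key1+ckey1, key3, key0+key1) must make the attach fail, which surfaces as JSON-RPC error -5 ("Input/output error") when DH-HMAC-CHAP negotiation is rejected. A simplified sketch of the pattern (the real NOT in autotest_common.sh additionally validates the argument via type -t and tracks the exit status in es, as visible in the trace):

    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded -> assertion fails
        fi
        return 0       # non-zero exit was the expected outcome
    }

    # e.g. attaching with a key the target was not configured for must fail:
    NOT "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2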
00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:16.339 rmmod nvme_tcp 00:34:16.339 rmmod nvme_fabrics 00:34:16.339 rmmod nvme_keyring 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2828628 ']' 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2828628 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 2828628 ']' 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 2828628 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:16.339 16:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2828628 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2828628' 00:34:16.597 killing process with pid 2828628 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 2828628 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 2828628 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:16.597 16:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.124 16:47:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:19.124 16:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.m7T /tmp/spdk.key-sha256.G7k /tmp/spdk.key-sha384.kKz /tmp/spdk.key-sha512.QaH /tmp/spdk.key-sha512.VHv /tmp/spdk.key-sha384.kUu /tmp/spdk.key-sha256.1A6 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:34:19.124 00:34:19.124 real 3m9.097s 00:34:19.124 user 7m19.396s 00:34:19.124 sys 0m25.110s 00:34:19.124 16:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:19.125 16:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.125 ************************************ 00:34:19.125 END TEST 
nvmf_auth_target 00:34:19.125 ************************************ 00:34:19.125 16:47:38 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:34:19.125 16:47:38 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:34:19.125 16:47:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:34:19.125 16:47:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:19.125 16:47:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:19.125 ************************************ 00:34:19.125 START TEST nvmf_bdevio_no_huge 00:34:19.125 ************************************ 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:34:19.125 * Looking for test storage... 00:34:19.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.125 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:34:19.126 16:47:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:21.656 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:34:21.657 Found 0000:82:00.0 (0x8086 - 0x159b) 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:34:21.657 Found 0000:82:00.1 (0x8086 - 0x159b) 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:21.657 16:47:40 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:34:21.657 Found net devices under 0000:82:00.0: cvl_0_0 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:34:21.657 Found net devices under 0000:82:00.1: cvl_0_1 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:21.657 
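The nvmftestinit sequence traced above builds a two-port loopback rig: the first E810 netdev (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), while the second port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic actually crosses the link between the two ports. A minimal stand-alone sketch of the same setup, using only commands that appear in this trace (the back-to-back cabling between the two E810 ports is an assumption about the lab wiring):

    ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the initiator port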
16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:21.657 16:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:21.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:21.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:34:21.657 00:34:21.657 --- 10.0.0.2 ping statistics --- 00:34:21.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.657 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:21.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:21.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:34:21.657 00:34:21.657 --- 10.0.0.1 ping statistics --- 00:34:21.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.657 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2831684 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2831684 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 2831684 ']' 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:21.657 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:21.657 [2024-07-22 16:47:41.092927] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:21.657 [2024-07-22 16:47:41.093067] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:34:21.657 [2024-07-22 16:47:41.173138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:21.657 [2024-07-22 16:47:41.258030] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:21.657 [2024-07-22 16:47:41.258105] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:21.657 [2024-07-22 16:47:41.258135] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:21.657 [2024-07-22 16:47:41.258147] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:21.657 [2024-07-22 16:47:41.258158] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:21.657 [2024-07-22 16:47:41.258521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:21.657 [2024-07-22 16:47:41.258552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:21.657 [2024-07-22 16:47:41.258610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:21.657 [2024-07-22 16:47:41.258612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:21.916 [2024-07-22 16:47:41.383499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 
00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:21.916 Malloc0 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:21.916 [2024-07-22 16:47:41.423784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:21.916 { 00:34:21.916 "params": { 00:34:21.916 "name": "Nvme$subsystem", 00:34:21.916 "trtype": "$TEST_TRANSPORT", 00:34:21.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.916 "adrfam": "ipv4", 00:34:21.916 "trsvcid": "$NVMF_PORT", 00:34:21.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.916 "hdgst": ${hdgst:-false}, 00:34:21.916 "ddgst": ${ddgst:-false} 00:34:21.916 }, 00:34:21.916 "method": "bdev_nvme_attach_controller" 00:34:21.916 } 00:34:21.916 EOF 00:34:21.916 )") 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:34:21.916 16:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:21.916 "params": { 00:34:21.916 "name": "Nvme1", 00:34:21.916 "trtype": "tcp", 00:34:21.916 "traddr": "10.0.0.2", 00:34:21.916 "adrfam": "ipv4", 00:34:21.916 "trsvcid": "4420", 00:34:21.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:21.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:21.916 "hdgst": false, 00:34:21.916 "ddgst": false 00:34:21.916 }, 00:34:21.916 "method": "bdev_nvme_attach_controller" 00:34:21.916 }' 00:34:21.916 [2024-07-22 16:47:41.471245] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:21.916 [2024-07-22 16:47:41.471340] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2831716 ] 00:34:21.916 [2024-07-22 16:47:41.540690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:22.174 [2024-07-22 16:47:41.630890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.175 [2024-07-22 16:47:41.630939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:22.175 [2024-07-22 16:47:41.630942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.175 I/O targets: 00:34:22.175 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:22.175 00:34:22.175 00:34:22.175 CUnit - A unit testing framework for C - Version 2.1-3 00:34:22.175 http://cunit.sourceforge.net/ 00:34:22.175 00:34:22.175 00:34:22.175 Suite: bdevio tests on: Nvme1n1 00:34:22.433 Test: blockdev write read block ...passed 00:34:22.433 Test: blockdev write zeroes read block ...passed 00:34:22.433 Test: blockdev write zeroes read no split ...passed 00:34:22.433 Test: blockdev write zeroes read split ...passed 00:34:22.433 Test: blockdev write zeroes read split partial ...passed 00:34:22.433 Test: blockdev reset ...[2024-07-22 16:47:41.994680] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.433 [2024-07-22 16:47:41.994797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb822a0 (9): Bad file descriptor 00:34:22.433 [2024-07-22 16:47:42.006544] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:22.433 passed 00:34:22.433 Test: blockdev write read 8 blocks ...passed 00:34:22.690 Test: blockdev write read size > 128k ...passed 00:34:22.690 Test: blockdev write read invalid size ...passed 00:34:22.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:22.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:22.690 Test: blockdev write read max offset ...passed 00:34:22.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:22.690 Test: blockdev writev readv 8 blocks ...passed 00:34:22.690 Test: blockdev writev readv 30 x 1block ...passed 00:34:22.690 Test: blockdev writev readv block ...passed 00:34:22.691 Test: blockdev writev readv size > 128k ...passed 00:34:22.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:22.691 Test: blockdev comparev and writev ...[2024-07-22 16:47:42.307359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.691 [2024-07-22 16:47:42.307396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:22.691 [2024-07-22 16:47:42.307421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.691 [2024-07-22 16:47:42.307439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:22.691 [2024-07-22 16:47:42.307810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.691 [2024-07-22 16:47:42.307835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:22.691 [2024-07-22 16:47:42.307857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.691 [2024-07-22 16:47:42.307873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:22.691 [2024-07-22 16:47:42.308227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.691 [2024-07-22 16:47:42.308251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:22.691 [2024-07-22 16:47:42.308282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.691 [2024-07-22 16:47:42.308298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:22.691 [2024-07-22 16:47:42.308665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.691 [2024-07-22 16:47:42.308689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:22.691 [2024-07-22 16:47:42.308711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.691 [2024-07-22 16:47:42.308726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:22.949 passed
00:34:22.949 Test: blockdev nvme passthru rw ...passed
00:34:22.949 Test: blockdev nvme passthru vendor specific ...[2024-07-22 16:47:42.392362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:22.949 [2024-07-22 16:47:42.392388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:34:22.949 [2024-07-22 16:47:42.392560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:22.949 [2024-07-22 16:47:42.392583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:22.949 [2024-07-22 16:47:42.392751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:22.949 [2024-07-22 16:47:42.392774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:22.949 [2024-07-22 16:47:42.392942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:22.949 [2024-07-22 16:47:42.392972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:22.949 passed
00:34:22.949 Test: blockdev nvme admin passthru ...passed
00:34:22.949 Test: blockdev copy ...passed
00:34:22.949
00:34:22.949 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:34:22.949               suites      1      1    n/a      0        0
00:34:22.949                tests     23     23     23      0        0
00:34:22.949              asserts    152    152    152      0      n/a
00:34:22.949
00:34:22.949 Elapsed time =    1.320 seconds
00:34:23.207 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:23.207 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.207 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:34:23.207 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:34:23.208 rmmod nvme_tcp
00:34:23.208 rmmod nvme_fabrics
00:34:23.208 rmmod nvme_keyring
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0
00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2831684 ']'
00:34:23.208 16:47:42
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2831684 00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 2831684 ']' 00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 2831684 00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:23.208 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2831684 00:34:23.466 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:34:23.466 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:34:23.466 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2831684' 00:34:23.466 killing process with pid 2831684 00:34:23.466 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 2831684 00:34:23.466 16:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 2831684 00:34:23.724 16:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:23.724 16:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:23.724 16:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:23.724 16:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:23.724 16:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:23.724 16:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.724 16:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:23.724 16:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.252 16:47:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:26.252 00:34:26.252 real 0m6.970s 00:34:26.252 user 0m10.630s 00:34:26.252 sys 0m2.888s 00:34:26.252 16:47:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:26.252 16:47:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:26.252 ************************************ 00:34:26.252 END TEST nvmf_bdevio_no_huge 00:34:26.252 ************************************ 00:34:26.252 16:47:45 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:34:26.252 16:47:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:26.252 16:47:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:26.252 16:47:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.252 ************************************ 00:34:26.252 START TEST nvmf_tls 00:34:26.252 ************************************ 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:34:26.252 * Looking for test storage... 
00:34:26.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:26.252 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:34:26.253 16:47:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:34:28.783 
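The PCI scan that follows (the bdevio test ran the identical discovery earlier) matches NICs by vendor:device ID: 0x8086:0x1592 and 0x8086:0x159b for E810, 0x8086:0x37d2 for X722, and a list of 0x15b3 Mellanox parts. Outside the harness the same matching can be reproduced with stock pciutils; a sketch using standard lspci options:

    lspci -D -d 8086:159b                      # E810 functions; this rig reports 0000:82:00.0 and 0000:82:00.1
    lspci -D -d 8086:1592                      # the other E810 ID the script checks
    lspci -D -d 8086:37d2                      # X722
    lspci -D -d 15b3:                          # any Mellanox function
    ls /sys/bus/pci/devices/0000:82:00.0/net   # netdev name behind a function (cvl_0_0 here)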
16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:34:28.783 Found 0000:82:00.0 (0x8086 - 0x159b) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:34:28.783 Found 0000:82:00.1 (0x8086 - 0x159b) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:34:28.783 Found net devices under 0000:82:00.0: cvl_0_0 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:34:28.783 Found net devices under 0000:82:00.1: cvl_0_1 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.783 16:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:28.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:34:28.783 00:34:28.783 --- 10.0.0.2 ping statistics --- 00:34:28.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.783 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:28.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:34:28.783 00:34:28.783 --- 10.0.0.1 ping statistics --- 00:34:28.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.783 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2834190 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:34:28.783 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2834190 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2834190 ']' 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:28.784 [2024-07-22 16:47:48.084941] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:28.784 [2024-07-22 16:47:48.085060] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.784 EAL: No free 2048 kB hugepages reported on node 1 00:34:28.784 [2024-07-22 16:47:48.162711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.784 [2024-07-22 16:47:48.251294] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.784 [2024-07-22 16:47:48.251368] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:28.784 [2024-07-22 16:47:48.251382] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.784 [2024-07-22 16:47:48.251394] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.784 [2024-07-22 16:47:48.251404] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.784 [2024-07-22 16:47:48.251442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:34:28.784 16:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:34:29.042 true 00:34:29.042 16:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:29.042 16:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:34:29.300 16:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:34:29.300 16:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:34:29.300 16:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:34:29.558 16:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:29.558 16:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:34:29.816 16:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:34:29.816 16:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:34:29.816 16:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:34:30.074 16:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:30.074 16:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:34:30.332 16:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:34:30.332 16:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:34:30.332 16:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:30.332 16:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:34:30.591 16:47:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:34:30.591 16:47:50 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:34:30.591 16:47:50 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:34:30.849 16:47:50 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:30.849 16:47:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:34:31.107 16:47:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:34:31.107 16:47:50 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:34:31.107 16:47:50 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:34:31.365 16:47:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:31.365 16:47:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ELzf0J11IK 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ZQjSJVYsLe 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ELzf0J11IK 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ZQjSJVYsLe 00:34:31.623 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:34:32.189 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:34:32.447 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ELzf0J11IK 00:34:32.447 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ELzf0J11IK 00:34:32.447 16:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:34:32.705 [2024-07-22 16:47:52.120376] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.705 16:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:34:32.963 16:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:34:33.221 [2024-07-22 16:47:52.689897] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:33.221 [2024-07-22 16:47:52.690145] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.221 16:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:34:33.479 malloc0 00:34:33.479 16:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:33.737 16:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ELzf0J11IK 00:34:33.995 [2024-07-22 16:47:53.435300] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:33.995 16:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ELzf0J11IK 00:34:33.995 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.962 Initializing NVMe Controllers 00:34:43.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:43.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:43.962 Initialization complete. Launching workers. 
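For reference, the target-side TLS bring-up traced above condenses to the RPC sequence below. This is a condensed sketch of the exact calls in this run, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path and $key for the 0600 key file (/tmp/tmp.ELzf0J11IK here):

# socket layer: ssl implementation, TLS 1.3 (the last tls-version set before init)
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
# NVMe-oF target: TCP transport, one subsystem, TLS listener (-k), one malloc namespace
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# bind host1 to the PSK file (the deprecated path-based form, per the warning above)
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"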
00:34:43.962 ======================================================== 00:34:43.962 Latency(us) 00:34:43.962 Device Information : IOPS MiB/s Average min max 00:34:43.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7849.19 30.66 8156.45 1310.88 10035.76 00:34:43.962 ======================================================== 00:34:43.962 Total : 7849.19 30.66 8156.45 1310.88 10035.76 00:34:43.962 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ELzf0J11IK 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ELzf0J11IK' 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2836088 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2836088 /var/tmp/bdevperf.sock 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2836088 ']' 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:43.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:43.962 16:48:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:43.962 [2024-07-22 16:48:03.596199] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
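The bdevperf half that starts here follows the harness's run_bdevperf pattern: start bdevperf with -z so it waits on a private RPC socket, attach a TLS controller through that socket, then drive the verify workload. Condensed from the trace (bdevperf, rpc.py and bdevperf.py stand for the full build/examples and scripts paths used in this run; $psk is whichever key file the scenario supplies):

bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# attach over the bdevperf socket; --psk is the deprecated file-path form
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$psk"
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests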
00:34:43.962 [2024-07-22 16:48:03.596288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836088 ] 00:34:44.221 EAL: No free 2048 kB hugepages reported on node 1 00:34:44.221 [2024-07-22 16:48:03.664118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.221 [2024-07-22 16:48:03.747655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:44.221 16:48:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:44.221 16:48:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:34:44.221 16:48:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ELzf0J11IK 00:34:44.479 [2024-07-22 16:48:04.128507] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:44.479 [2024-07-22 16:48:04.128620] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:34:44.738 TLSTESTn1 00:34:44.738 16:48:04 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:34:44.738 Running I/O for 10 seconds... 00:34:56.935 00:34:56.935 Latency(us) 00:34:56.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.935 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:56.935 Verification LBA range: start 0x0 length 0x2000 00:34:56.935 TLSTESTn1 : 10.04 2870.19 11.21 0.00 0.00 44510.67 10485.76 68739.98 00:34:56.935 =================================================================================================================== 00:34:56.935 Total : 2870.19 11.21 0.00 0.00 44510.67 10485.76 68739.98 00:34:56.935 0 00:34:56.935 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:56.935 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2836088 00:34:56.935 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2836088 ']' 00:34:56.935 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2836088 00:34:56.935 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:34:56.935 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:56.935 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2836088 00:34:56.935 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:34:56.935 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:34:56.935 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2836088' 00:34:56.935 killing process with pid 2836088 00:34:56.935 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2836088 00:34:56.935 Received shutdown signal, test time was about 10.000000 seconds 00:34:56.935 00:34:56.936 Latency(us) 00:34:56.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:34:56.936 =================================================================================================================== 00:34:56.936 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:56.936 [2024-07-22 16:48:14.431557] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2836088 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZQjSJVYsLe 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZQjSJVYsLe 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZQjSJVYsLe 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZQjSJVYsLe' 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2837893 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2837893 /var/tmp/bdevperf.sock 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2837893 ']' 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:56.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:56.936 [2024-07-22 16:48:14.701658] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:34:56.936 [2024-07-22 16:48:14.701752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837893 ] 00:34:56.936 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.936 [2024-07-22 16:48:14.768715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.936 [2024-07-22 16:48:14.850565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:34:56.936 16:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZQjSJVYsLe 00:34:56.936 [2024-07-22 16:48:15.174244] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:56.936 [2024-07-22 16:48:15.174380] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:34:56.936 [2024-07-22 16:48:15.179723] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:56.936 [2024-07-22 16:48:15.180261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1074840 (107): Transport endpoint is not connected 00:34:56.936 [2024-07-22 16:48:15.181237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1074840 (9): Bad file descriptor 00:34:56.936 [2024-07-22 16:48:15.182236] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:56.936 [2024-07-22 16:48:15.182257] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:34:56.936 [2024-07-22 16:48:15.182289] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:56.936 request: 00:34:56.936 { 00:34:56.936 "name": "TLSTEST", 00:34:56.936 "trtype": "tcp", 00:34:56.936 "traddr": "10.0.0.2", 00:34:56.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:56.936 "adrfam": "ipv4", 00:34:56.936 "trsvcid": "4420", 00:34:56.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:56.936 "psk": "/tmp/tmp.ZQjSJVYsLe", 00:34:56.936 "method": "bdev_nvme_attach_controller", 00:34:56.936 "req_id": 1 00:34:56.936 } 00:34:56.936 Got JSON-RPC error response 00:34:56.936 response: 00:34:56.936 { 00:34:56.936 "code": -5, 00:34:56.936 "message": "Input/output error" 00:34:56.936 } 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2837893 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2837893 ']' 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2837893 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2837893 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2837893' 00:34:56.936 killing process with pid 2837893 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2837893 00:34:56.936 Received shutdown signal, test time was about 10.000000 seconds 00:34:56.936 00:34:56.936 Latency(us) 00:34:56.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.936 =================================================================================================================== 00:34:56.936 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:56.936 [2024-07-22 16:48:15.233191] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2837893 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ELzf0J11IK 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ELzf0J11IK 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ELzf0J11IK 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ELzf0J11IK' 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2838028 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2838028 /var/tmp/bdevperf.sock 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2838028 ']' 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:56.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:56.936 [2024-07-22 16:48:15.497601] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:34:56.936 [2024-07-22 16:48:15.497691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838028 ] 00:34:56.936 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.936 [2024-07-22 16:48:15.565304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.936 [2024-07-22 16:48:15.650416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:34:56.936 16:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ELzf0J11IK 00:34:56.936 [2024-07-22 16:48:16.017973] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:56.936 [2024-07-22 16:48:16.018102] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:34:56.936 [2024-07-22 16:48:16.029491] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:34:56.936 [2024-07-22 16:48:16.029522] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:34:56.936 [2024-07-22 16:48:16.029563] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:56.936 [2024-07-22 16:48:16.030126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc84840 (107): Transport endpoint is not connected 00:34:56.936 [2024-07-22 16:48:16.031115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc84840 (9): Bad file descriptor 00:34:56.936 [2024-07-22 16:48:16.032115] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:56.936 [2024-07-22 16:48:16.032138] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:34:56.936 [2024-07-22 16:48:16.032157] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:56.936 request: 00:34:56.936 { 00:34:56.936 "name": "TLSTEST", 00:34:56.936 "trtype": "tcp", 00:34:56.936 "traddr": "10.0.0.2", 00:34:56.936 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:56.936 "adrfam": "ipv4", 00:34:56.936 "trsvcid": "4420", 00:34:56.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:56.936 "psk": "/tmp/tmp.ELzf0J11IK", 00:34:56.936 "method": "bdev_nvme_attach_controller", 00:34:56.936 "req_id": 1 00:34:56.936 } 00:34:56.936 Got JSON-RPC error response 00:34:56.936 response: 00:34:56.936 { 00:34:56.936 "code": -5, 00:34:56.936 "message": "Input/output error" 00:34:56.936 } 00:34:56.936 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2838028 00:34:56.936 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2838028 ']' 00:34:56.936 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2838028 00:34:56.936 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2838028 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2838028' 00:34:56.937 killing process with pid 2838028 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2838028 00:34:56.937 Received shutdown signal, test time was about 10.000000 seconds 00:34:56.937 00:34:56.937 Latency(us) 00:34:56.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.937 =================================================================================================================== 00:34:56.937 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:56.937 [2024-07-22 16:48:16.079731] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2838028 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ELzf0J11IK 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ELzf0J11IK 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ELzf0J11IK 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ELzf0J11IK' 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2838164 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2838164 /var/tmp/bdevperf.sock 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2838164 ']' 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:56.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:56.937 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:56.937 [2024-07-22 16:48:16.339365] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:34:56.937 [2024-07-22 16:48:16.339456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838164 ] 00:34:56.937 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.937 [2024-07-22 16:48:16.435326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.937 [2024-07-22 16:48:16.536574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:57.195 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:57.195 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:34:57.195 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ELzf0J11IK 00:34:57.453 [2024-07-22 16:48:16.954877] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:57.453 [2024-07-22 16:48:16.955019] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:34:57.453 [2024-07-22 16:48:16.961827] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:34:57.453 [2024-07-22 16:48:16.961857] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:34:57.453 [2024-07-22 16:48:16.961897] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:57.453 [2024-07-22 16:48:16.962122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf44840 (107): Transport endpoint is not connected 00:34:57.453 [2024-07-22 16:48:16.963111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf44840 (9): Bad file descriptor 00:34:57.453 [2024-07-22 16:48:16.964112] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:34:57.453 [2024-07-22 16:48:16.964133] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:34:57.453 [2024-07-22 16:48:16.964151] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:34:57.453 request: 00:34:57.453 { 00:34:57.453 "name": "TLSTEST", 00:34:57.453 "trtype": "tcp", 00:34:57.453 "traddr": "10.0.0.2", 00:34:57.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:57.453 "adrfam": "ipv4", 00:34:57.453 "trsvcid": "4420", 00:34:57.453 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:57.453 "psk": "/tmp/tmp.ELzf0J11IK", 00:34:57.453 "method": "bdev_nvme_attach_controller", 00:34:57.453 "req_id": 1 00:34:57.453 } 00:34:57.453 Got JSON-RPC error response 00:34:57.453 response: 00:34:57.453 { 00:34:57.453 "code": -5, 00:34:57.453 "message": "Input/output error" 00:34:57.453 } 00:34:57.453 16:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2838164 00:34:57.453 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2838164 ']' 00:34:57.453 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2838164 00:34:57.453 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:34:57.453 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:57.453 16:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2838164 00:34:57.453 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:34:57.453 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:34:57.453 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2838164' 00:34:57.453 killing process with pid 2838164 00:34:57.453 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2838164 00:34:57.453 Received shutdown signal, test time was about 10.000000 seconds 00:34:57.453 00:34:57.453 Latency(us) 00:34:57.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.453 =================================================================================================================== 00:34:57.453 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:57.453 [2024-07-22 16:48:17.016551] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:34:57.453 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2838164 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
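The case/type lines being traced around this point are the harness's NOT wrapper expanding: it runs a command that is expected to fail and inverts the status. A minimal sketch of the idea (the real helper in common/autotest_common.sh is more thorough, validating the argument and propagating distinct es codes, as the es= traces above show):

NOT() {
    # succeed only if the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}
# e.g. the no-PSK case being expanded here: attaching without a key must fail
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''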
00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2838277 00:34:57.711 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:34:57.712 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:57.712 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2838277 /var/tmp/bdevperf.sock 00:34:57.712 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2838277 ']' 00:34:57.712 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:57.712 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:57.712 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:57.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:57.712 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:57.712 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:57.712 [2024-07-22 16:48:17.269994] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:34:57.712 [2024-07-22 16:48:17.270102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838277 ] 00:34:57.712 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.712 [2024-07-22 16:48:17.346278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.970 [2024-07-22 16:48:17.440313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:57.970 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:57.970 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:34:57.970 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:34:58.228 [2024-07-22 16:48:17.834230] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:58.228 [2024-07-22 16:48:17.836184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214af10 (9): Bad file descriptor 00:34:58.228 [2024-07-22 16:48:17.837180] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:58.228 [2024-07-22 16:48:17.837203] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:34:58.228 [2024-07-22 16:48:17.837222] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:58.228 request: 00:34:58.228 { 00:34:58.228 "name": "TLSTEST", 00:34:58.228 "trtype": "tcp", 00:34:58.228 "traddr": "10.0.0.2", 00:34:58.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.228 "adrfam": "ipv4", 00:34:58.228 "trsvcid": "4420", 00:34:58.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.228 "method": "bdev_nvme_attach_controller", 00:34:58.228 "req_id": 1 00:34:58.228 } 00:34:58.228 Got JSON-RPC error response 00:34:58.228 response: 00:34:58.228 { 00:34:58.228 "code": -5, 00:34:58.228 "message": "Input/output error" 00:34:58.228 } 00:34:58.228 16:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2838277 00:34:58.228 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2838277 ']' 00:34:58.228 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2838277 00:34:58.228 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:34:58.228 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:58.228 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2838277 00:34:58.487 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:34:58.487 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:34:58.487 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2838277' 00:34:58.487 killing process with pid 2838277 00:34:58.487 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2838277 00:34:58.487 Received shutdown signal, test time was about 10.000000 seconds 00:34:58.487 00:34:58.487 Latency(us) 00:34:58.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.487 =================================================================================================================== 00:34:58.487 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:58.487 16:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2838277 00:34:58.487 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:34:58.487 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:34:58.487 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:58.487 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:58.487 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:58.487 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2834190 00:34:58.487 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2834190 ']' 00:34:58.487 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2834190 00:34:58.487 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:34:58.487 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:58.487 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2834190 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2834190' 00:34:58.746 killing process with pid 2834190 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2834190 
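Before the long-key section that follows, the keys themselves are worth decoding. A hedged reconstruction of the python heredoc that format_interchange_psk runs (nvmf/common.sh@705): append a CRC32 of the key bytes (little-endian placement is an assumption here) and base64-wrap the result as NVMeTLSkey-1:<digest>:<b64>:, where digest 01 marks the 32-byte keys used above and 02 the 48-byte key_long below:

python3 - <<'PY'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"     # the ASCII string itself is the key material
digest = 1                                    # 01 for the keys above; key_long below uses 2
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed CRC32 placement/endianness
print("NVMeTLSkey-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))
PY
# if the CRC assumption holds, this reproduces the first key above:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: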
00:34:58.746 [2024-07-22 16:48:18.138790] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2834190 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:34:58.746 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.eSenNS7DP3 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.eSenNS7DP3 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2838452 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2838452 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2838452 ']' 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:59.014 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:59.014 [2024-07-22 16:48:18.487201] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:34:59.014 [2024-07-22 16:48:18.487310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.014 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.014 [2024-07-22 16:48:18.566791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.014 [2024-07-22 16:48:18.655147] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.014 [2024-07-22 16:48:18.655205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:59.014 [2024-07-22 16:48:18.655231] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.014 [2024-07-22 16:48:18.655245] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.014 [2024-07-22 16:48:18.655257] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.014 [2024-07-22 16:48:18.655288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.280 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:59.280 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:34:59.280 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:59.280 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:59.280 16:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:59.280 16:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:59.280 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.eSenNS7DP3 00:34:59.280 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eSenNS7DP3 00:34:59.280 16:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:34:59.538 [2024-07-22 16:48:19.061058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.538 16:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:34:59.796 16:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:00.054 [2024-07-22 16:48:19.550456] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:00.054 [2024-07-22 16:48:19.550710] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:00.054 16:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:00.312 malloc0 00:35:00.312 16:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:00.570 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eSenNS7DP3 
00:35:00.828 [2024-07-22 16:48:20.304860] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eSenNS7DP3 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eSenNS7DP3' 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2838614 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2838614 /var/tmp/bdevperf.sock 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2838614 ']' 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:00.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:00.828 16:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:00.828 [2024-07-22 16:48:20.368704] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:35:00.828 [2024-07-22 16:48:20.368790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838614 ] 00:35:00.828 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.828 [2024-07-22 16:48:20.435439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.087 [2024-07-22 16:48:20.520695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:01.087 16:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:01.087 16:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:01.087 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eSenNS7DP3 00:35:01.345 [2024-07-22 16:48:20.854159] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:01.345 [2024-07-22 16:48:20.854291] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:01.345 TLSTESTn1 00:35:01.345 16:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:35:01.603 Running I/O for 10 seconds... 00:35:11.668 00:35:11.668 Latency(us) 00:35:11.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.668 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:11.668 Verification LBA range: start 0x0 length 0x2000 00:35:11.668 TLSTESTn1 : 10.02 3730.83 14.57 0.00 0.00 34245.38 9806.13 37865.24 00:35:11.668 =================================================================================================================== 00:35:11.668 Total : 3730.83 14.57 0.00 0.00 34245.38 9806.13 37865.24 00:35:11.668 0 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2838614 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2838614 ']' 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2838614 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2838614 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2838614' 00:35:11.668 killing process with pid 2838614 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2838614 00:35:11.668 Received shutdown signal, test time was about 10.000000 seconds 00:35:11.668 00:35:11.668 Latency(us) 00:35:11.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:35:11.668 =================================================================================================================== 00:35:11.668 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.668 [2024-07-22 16:48:31.124640] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:11.668 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2838614 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.eSenNS7DP3 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eSenNS7DP3 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eSenNS7DP3 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eSenNS7DP3 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eSenNS7DP3' 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2839934 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2839934 /var/tmp/bdevperf.sock 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2839934 ']' 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:11.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:11.926 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:11.926 [2024-07-22 16:48:31.398125] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
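The chmod 0666 traced above sets up the last negative case in this stretch: bdev_nvme refuses PSK files accessible beyond the owner, which is why every 0600 key worked and this attach fails below with "Incorrect permissions for PSK file" and a -1 Operation not permitted response. A hypothetical shell equivalent of that gate (the real check lives in bdev_nvme.c and may differ in detail):

# reject the key file if any group/other permission bits are set
mode=$(stat -c '%a' /tmp/tmp.eSenNS7DP3)
if [ $(( 0$mode & 077 )) -ne 0 ]; then
    echo "Incorrect permissions for PSK file" >&2
    exit 1
fi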
00:35:11.926 [2024-07-22 16:48:31.398221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839934 ] 00:35:11.926 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.926 [2024-07-22 16:48:31.465089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.926 [2024-07-22 16:48:31.548607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:12.185 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:12.185 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:12.185 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eSenNS7DP3 00:35:12.444 [2024-07-22 16:48:31.891921] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:12.444 [2024-07-22 16:48:31.892026] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:35:12.444 [2024-07-22 16:48:31.892042] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.eSenNS7DP3 00:35:12.444 request: 00:35:12.444 { 00:35:12.444 "name": "TLSTEST", 00:35:12.444 "trtype": "tcp", 00:35:12.444 "traddr": "10.0.0.2", 00:35:12.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:12.444 "adrfam": "ipv4", 00:35:12.444 "trsvcid": "4420", 00:35:12.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:12.444 "psk": "/tmp/tmp.eSenNS7DP3", 00:35:12.444 "method": "bdev_nvme_attach_controller", 00:35:12.444 "req_id": 1 00:35:12.444 } 00:35:12.444 Got JSON-RPC error response 00:35:12.444 response: 00:35:12.444 { 00:35:12.444 "code": -1, 00:35:12.444 "message": "Operation not permitted" 00:35:12.444 } 00:35:12.444 16:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2839934 00:35:12.444 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2839934 ']' 00:35:12.444 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2839934 00:35:12.444 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:12.444 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:12.444 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2839934 00:35:12.444 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:35:12.444 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:35:12.444 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2839934' 00:35:12.444 killing process with pid 2839934 00:35:12.444 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2839934 00:35:12.444 Received shutdown signal, test time was about 10.000000 seconds 00:35:12.444 00:35:12.444 Latency(us) 00:35:12.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:12.444 =================================================================================================================== 00:35:12.444 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:12.444 16:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 2839934 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2838452 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2838452 ']' 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2838452 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2838452 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2838452' 00:35:12.702 killing process with pid 2838452 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2838452 00:35:12.702 [2024-07-22 16:48:32.179085] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:12.702 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2838452 00:35:12.960 16:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:35:12.960 16:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:12.960 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:12.961 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:12.961 16:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2840079 00:35:12.961 16:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:12.961 16:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2840079 00:35:12.961 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2840079 ']' 00:35:12.961 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.961 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:12.961 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.961 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:12.961 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:12.961 [2024-07-22 16:48:32.472879] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
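That was the negative half of the PSK permission check: with the key just chmod'ed to 0666, bdev_nvme_load_psk rejects the file ("Incorrect permissions for PSK file"), the attach RPC comes back as code -1 / "Operation not permitted", and the NOT wrapper counts the expected failure as a pass. A shell recap of the check, assuming the interchange key created earlier in the run is still at /tmp/tmp.eSenNS7DP3 and bdevperf is listening on /var/tmp/bdevperf.sock (a sketch, not the verbatim target/tls.sh):

    chmod 0666 /tmp/tmp.eSenNS7DP3   # world-readable: SPDK must refuse to load it
    if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.eSenNS7DP3; then
      echo "attach unexpectedly succeeded with a 0666 key" >&2
      exit 1
    fi
    chmod 0600 /tmp/tmp.eSenNS7DP3   # owner-only again; the script does this at target/tls.sh@181

In the log the script first repeats the same world-readable-key check on the target side (nvmf_subsystem_add_host fails with -32603 "Internal error") before restoring 0600.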
00:35:12.961 [2024-07-22 16:48:32.472960] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:12.961 EAL: No free 2048 kB hugepages reported on node 1 00:35:12.961 [2024-07-22 16:48:32.551450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.219 [2024-07-22 16:48:32.639816] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:13.219 [2024-07-22 16:48:32.639873] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:13.219 [2024-07-22 16:48:32.639899] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.219 [2024-07-22 16:48:32.639913] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:13.219 [2024-07-22 16:48:32.639925] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:13.219 [2024-07-22 16:48:32.639956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.eSenNS7DP3 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.eSenNS7DP3 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.eSenNS7DP3 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eSenNS7DP3 00:35:13.219 16:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:13.477 [2024-07-22 16:48:33.013500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:13.478 16:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:13.736 16:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:13.994 [2024-07-22 16:48:33.546931] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:35:13.994 [2024-07-22 16:48:33.547188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:13.994 16:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:14.252 malloc0 00:35:14.252 16:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:14.510 16:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eSenNS7DP3 00:35:14.768 [2024-07-22 16:48:34.280053] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:35:14.768 [2024-07-22 16:48:34.280096] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:35:14.768 [2024-07-22 16:48:34.280131] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:35:14.768 request: 00:35:14.768 { 00:35:14.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:14.768 "host": "nqn.2016-06.io.spdk:host1", 00:35:14.768 "psk": "/tmp/tmp.eSenNS7DP3", 00:35:14.768 "method": "nvmf_subsystem_add_host", 00:35:14.768 "req_id": 1 00:35:14.768 } 00:35:14.768 Got JSON-RPC error response 00:35:14.768 response: 00:35:14.768 { 00:35:14.768 "code": -32603, 00:35:14.768 "message": "Internal error" 00:35:14.768 } 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2840079 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2840079 ']' 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2840079 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2840079 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2840079' 00:35:14.768 killing process with pid 2840079 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2840079 00:35:14.768 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2840079 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.eSenNS7DP3 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=2840372 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2840372 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2840372 ']' 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:15.026 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:15.026 [2024-07-22 16:48:34.637316] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:15.026 [2024-07-22 16:48:34.637410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:15.026 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.285 [2024-07-22 16:48:34.716993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.285 [2024-07-22 16:48:34.805172] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:15.285 [2024-07-22 16:48:34.805243] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:15.285 [2024-07-22 16:48:34.805268] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:15.285 [2024-07-22 16:48:34.805282] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:15.285 [2024-07-22 16:48:34.805294] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
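setup_nvmf_tgt, which just failed with the 0666 key and is re-run below against this fresh target now that 0600 is restored, reduces to the following rpc.py sequence (paths relative to the SPDK checkout; a condensed sketch of the helper, not the verbatim script):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    # -k asks for the TLS ("secure_channel") listener.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # This is the step that opens and permission-checks the PSK file.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eSenNS7DP3

As the tcp.c:3575 error above showed, it is nvmf_subsystem_add_host that actually loads the key, which is why the 0666 run got as far as creating the transport, subsystem, and listener before failing.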
00:35:15.285 [2024-07-22 16:48:34.805325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.285 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:15.285 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:15.285 16:48:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:15.285 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:15.285 16:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:15.543 16:48:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:15.543 16:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.eSenNS7DP3 00:35:15.543 16:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eSenNS7DP3 00:35:15.543 16:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:15.543 [2024-07-22 16:48:35.170367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:15.543 16:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:16.109 16:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:16.109 [2024-07-22 16:48:35.675784] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:16.109 [2024-07-22 16:48:35.676059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.109 16:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:16.367 malloc0 00:35:16.367 16:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:16.625 16:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eSenNS7DP3 00:35:16.883 [2024-07-22 16:48:36.428688] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:16.883 16:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2840650 00:35:16.883 16:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:16.883 16:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:16.883 16:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2840650 /var/tmp/bdevperf.sock 00:35:16.883 16:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2840650 ']' 00:35:16.883 16:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:16.883 16:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:16.883 16:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:16.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:16.883 16:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:16.883 16:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:16.883 [2024-07-22 16:48:36.486787] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:16.883 [2024-07-22 16:48:36.486882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840650 ] 00:35:16.883 EAL: No free 2048 kB hugepages reported on node 1 00:35:17.141 [2024-07-22 16:48:36.560284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.141 [2024-07-22 16:48:36.646061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:17.141 16:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:17.141 16:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:17.141 16:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eSenNS7DP3 00:35:17.399 [2024-07-22 16:48:36.971744] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:17.399 [2024-07-22 16:48:36.971866] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:17.399 TLSTESTn1 00:35:17.657 16:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:35:17.915 16:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:35:17.915 "subsystems": [ 00:35:17.915 { 00:35:17.915 "subsystem": "keyring", 00:35:17.915 "config": [] 00:35:17.915 }, 00:35:17.915 { 00:35:17.915 "subsystem": "iobuf", 00:35:17.915 "config": [ 00:35:17.915 { 00:35:17.915 "method": "iobuf_set_options", 00:35:17.915 "params": { 00:35:17.915 "small_pool_count": 8192, 00:35:17.915 "large_pool_count": 1024, 00:35:17.915 "small_bufsize": 8192, 00:35:17.915 "large_bufsize": 135168 00:35:17.915 } 00:35:17.915 } 00:35:17.915 ] 00:35:17.915 }, 00:35:17.915 { 00:35:17.915 "subsystem": "sock", 00:35:17.915 "config": [ 00:35:17.915 { 00:35:17.915 "method": "sock_set_default_impl", 00:35:17.915 "params": { 00:35:17.915 "impl_name": "posix" 00:35:17.915 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "sock_impl_set_options", 00:35:17.916 "params": { 00:35:17.916 "impl_name": "ssl", 00:35:17.916 "recv_buf_size": 4096, 00:35:17.916 "send_buf_size": 4096, 00:35:17.916 "enable_recv_pipe": true, 00:35:17.916 "enable_quickack": false, 00:35:17.916 "enable_placement_id": 0, 00:35:17.916 "enable_zerocopy_send_server": true, 00:35:17.916 "enable_zerocopy_send_client": false, 00:35:17.916 "zerocopy_threshold": 0, 00:35:17.916 "tls_version": 0, 00:35:17.916 "enable_ktls": false 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "sock_impl_set_options", 00:35:17.916 "params": { 00:35:17.916 "impl_name": "posix", 00:35:17.916 "recv_buf_size": 2097152, 00:35:17.916 "send_buf_size": 
2097152, 00:35:17.916 "enable_recv_pipe": true, 00:35:17.916 "enable_quickack": false, 00:35:17.916 "enable_placement_id": 0, 00:35:17.916 "enable_zerocopy_send_server": true, 00:35:17.916 "enable_zerocopy_send_client": false, 00:35:17.916 "zerocopy_threshold": 0, 00:35:17.916 "tls_version": 0, 00:35:17.916 "enable_ktls": false 00:35:17.916 } 00:35:17.916 } 00:35:17.916 ] 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "subsystem": "vmd", 00:35:17.916 "config": [] 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "subsystem": "accel", 00:35:17.916 "config": [ 00:35:17.916 { 00:35:17.916 "method": "accel_set_options", 00:35:17.916 "params": { 00:35:17.916 "small_cache_size": 128, 00:35:17.916 "large_cache_size": 16, 00:35:17.916 "task_count": 2048, 00:35:17.916 "sequence_count": 2048, 00:35:17.916 "buf_count": 2048 00:35:17.916 } 00:35:17.916 } 00:35:17.916 ] 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "subsystem": "bdev", 00:35:17.916 "config": [ 00:35:17.916 { 00:35:17.916 "method": "bdev_set_options", 00:35:17.916 "params": { 00:35:17.916 "bdev_io_pool_size": 65535, 00:35:17.916 "bdev_io_cache_size": 256, 00:35:17.916 "bdev_auto_examine": true, 00:35:17.916 "iobuf_small_cache_size": 128, 00:35:17.916 "iobuf_large_cache_size": 16 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "bdev_raid_set_options", 00:35:17.916 "params": { 00:35:17.916 "process_window_size_kb": 1024 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "bdev_iscsi_set_options", 00:35:17.916 "params": { 00:35:17.916 "timeout_sec": 30 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "bdev_nvme_set_options", 00:35:17.916 "params": { 00:35:17.916 "action_on_timeout": "none", 00:35:17.916 "timeout_us": 0, 00:35:17.916 "timeout_admin_us": 0, 00:35:17.916 "keep_alive_timeout_ms": 10000, 00:35:17.916 "arbitration_burst": 0, 00:35:17.916 "low_priority_weight": 0, 00:35:17.916 "medium_priority_weight": 0, 00:35:17.916 "high_priority_weight": 0, 00:35:17.916 "nvme_adminq_poll_period_us": 10000, 00:35:17.916 "nvme_ioq_poll_period_us": 0, 00:35:17.916 "io_queue_requests": 0, 00:35:17.916 "delay_cmd_submit": true, 00:35:17.916 "transport_retry_count": 4, 00:35:17.916 "bdev_retry_count": 3, 00:35:17.916 "transport_ack_timeout": 0, 00:35:17.916 "ctrlr_loss_timeout_sec": 0, 00:35:17.916 "reconnect_delay_sec": 0, 00:35:17.916 "fast_io_fail_timeout_sec": 0, 00:35:17.916 "disable_auto_failback": false, 00:35:17.916 "generate_uuids": false, 00:35:17.916 "transport_tos": 0, 00:35:17.916 "nvme_error_stat": false, 00:35:17.916 "rdma_srq_size": 0, 00:35:17.916 "io_path_stat": false, 00:35:17.916 "allow_accel_sequence": false, 00:35:17.916 "rdma_max_cq_size": 0, 00:35:17.916 "rdma_cm_event_timeout_ms": 0, 00:35:17.916 "dhchap_digests": [ 00:35:17.916 "sha256", 00:35:17.916 "sha384", 00:35:17.916 "sha512" 00:35:17.916 ], 00:35:17.916 "dhchap_dhgroups": [ 00:35:17.916 "null", 00:35:17.916 "ffdhe2048", 00:35:17.916 "ffdhe3072", 00:35:17.916 "ffdhe4096", 00:35:17.916 "ffdhe6144", 00:35:17.916 "ffdhe8192" 00:35:17.916 ] 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "bdev_nvme_set_hotplug", 00:35:17.916 "params": { 00:35:17.916 "period_us": 100000, 00:35:17.916 "enable": false 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "bdev_malloc_create", 00:35:17.916 "params": { 00:35:17.916 "name": "malloc0", 00:35:17.916 "num_blocks": 8192, 00:35:17.916 "block_size": 4096, 00:35:17.916 "physical_block_size": 4096, 00:35:17.916 "uuid": 
"0d105bb6-8a20-4ba9-93ef-80047662ae19", 00:35:17.916 "optimal_io_boundary": 0 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "bdev_wait_for_examine" 00:35:17.916 } 00:35:17.916 ] 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "subsystem": "nbd", 00:35:17.916 "config": [] 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "subsystem": "scheduler", 00:35:17.916 "config": [ 00:35:17.916 { 00:35:17.916 "method": "framework_set_scheduler", 00:35:17.916 "params": { 00:35:17.916 "name": "static" 00:35:17.916 } 00:35:17.916 } 00:35:17.916 ] 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "subsystem": "nvmf", 00:35:17.916 "config": [ 00:35:17.916 { 00:35:17.916 "method": "nvmf_set_config", 00:35:17.916 "params": { 00:35:17.916 "discovery_filter": "match_any", 00:35:17.916 "admin_cmd_passthru": { 00:35:17.916 "identify_ctrlr": false 00:35:17.916 } 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "nvmf_set_max_subsystems", 00:35:17.916 "params": { 00:35:17.916 "max_subsystems": 1024 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "nvmf_set_crdt", 00:35:17.916 "params": { 00:35:17.916 "crdt1": 0, 00:35:17.916 "crdt2": 0, 00:35:17.916 "crdt3": 0 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "nvmf_create_transport", 00:35:17.916 "params": { 00:35:17.916 "trtype": "TCP", 00:35:17.916 "max_queue_depth": 128, 00:35:17.916 "max_io_qpairs_per_ctrlr": 127, 00:35:17.916 "in_capsule_data_size": 4096, 00:35:17.916 "max_io_size": 131072, 00:35:17.916 "io_unit_size": 131072, 00:35:17.916 "max_aq_depth": 128, 00:35:17.916 "num_shared_buffers": 511, 00:35:17.916 "buf_cache_size": 4294967295, 00:35:17.916 "dif_insert_or_strip": false, 00:35:17.916 "zcopy": false, 00:35:17.916 "c2h_success": false, 00:35:17.916 "sock_priority": 0, 00:35:17.916 "abort_timeout_sec": 1, 00:35:17.916 "ack_timeout": 0, 00:35:17.916 "data_wr_pool_size": 0 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "nvmf_create_subsystem", 00:35:17.916 "params": { 00:35:17.916 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:17.916 "allow_any_host": false, 00:35:17.916 "serial_number": "SPDK00000000000001", 00:35:17.916 "model_number": "SPDK bdev Controller", 00:35:17.916 "max_namespaces": 10, 00:35:17.916 "min_cntlid": 1, 00:35:17.916 "max_cntlid": 65519, 00:35:17.916 "ana_reporting": false 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "nvmf_subsystem_add_host", 00:35:17.916 "params": { 00:35:17.916 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:17.916 "host": "nqn.2016-06.io.spdk:host1", 00:35:17.916 "psk": "/tmp/tmp.eSenNS7DP3" 00:35:17.916 } 00:35:17.916 }, 00:35:17.916 { 00:35:17.916 "method": "nvmf_subsystem_add_ns", 00:35:17.916 "params": { 00:35:17.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:17.917 "namespace": { 00:35:17.917 "nsid": 1, 00:35:17.917 "bdev_name": "malloc0", 00:35:17.917 "nguid": "0D105BB68A204BA993EF80047662AE19", 00:35:17.917 "uuid": "0d105bb6-8a20-4ba9-93ef-80047662ae19", 00:35:17.917 "no_auto_visible": false 00:35:17.917 } 00:35:17.917 } 00:35:17.917 }, 00:35:17.917 { 00:35:17.917 "method": "nvmf_subsystem_add_listener", 00:35:17.917 "params": { 00:35:17.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:17.917 "listen_address": { 00:35:17.917 "trtype": "TCP", 00:35:17.917 "adrfam": "IPv4", 00:35:17.917 "traddr": "10.0.0.2", 00:35:17.917 "trsvcid": "4420" 00:35:17.917 }, 00:35:17.917 "secure_channel": true 00:35:17.917 } 00:35:17.917 } 00:35:17.917 ] 00:35:17.917 } 00:35:17.917 ] 00:35:17.917 }' 00:35:17.917 16:48:37 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:35:18.175 16:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:35:18.175 "subsystems": [ 00:35:18.175 { 00:35:18.175 "subsystem": "keyring", 00:35:18.175 "config": [] 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "subsystem": "iobuf", 00:35:18.175 "config": [ 00:35:18.175 { 00:35:18.175 "method": "iobuf_set_options", 00:35:18.175 "params": { 00:35:18.175 "small_pool_count": 8192, 00:35:18.175 "large_pool_count": 1024, 00:35:18.175 "small_bufsize": 8192, 00:35:18.175 "large_bufsize": 135168 00:35:18.175 } 00:35:18.175 } 00:35:18.175 ] 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "subsystem": "sock", 00:35:18.175 "config": [ 00:35:18.175 { 00:35:18.175 "method": "sock_set_default_impl", 00:35:18.175 "params": { 00:35:18.175 "impl_name": "posix" 00:35:18.175 } 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "method": "sock_impl_set_options", 00:35:18.175 "params": { 00:35:18.175 "impl_name": "ssl", 00:35:18.175 "recv_buf_size": 4096, 00:35:18.175 "send_buf_size": 4096, 00:35:18.175 "enable_recv_pipe": true, 00:35:18.175 "enable_quickack": false, 00:35:18.175 "enable_placement_id": 0, 00:35:18.175 "enable_zerocopy_send_server": true, 00:35:18.175 "enable_zerocopy_send_client": false, 00:35:18.175 "zerocopy_threshold": 0, 00:35:18.175 "tls_version": 0, 00:35:18.175 "enable_ktls": false 00:35:18.175 } 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "method": "sock_impl_set_options", 00:35:18.175 "params": { 00:35:18.175 "impl_name": "posix", 00:35:18.175 "recv_buf_size": 2097152, 00:35:18.175 "send_buf_size": 2097152, 00:35:18.175 "enable_recv_pipe": true, 00:35:18.175 "enable_quickack": false, 00:35:18.175 "enable_placement_id": 0, 00:35:18.175 "enable_zerocopy_send_server": true, 00:35:18.175 "enable_zerocopy_send_client": false, 00:35:18.175 "zerocopy_threshold": 0, 00:35:18.175 "tls_version": 0, 00:35:18.175 "enable_ktls": false 00:35:18.175 } 00:35:18.175 } 00:35:18.175 ] 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "subsystem": "vmd", 00:35:18.175 "config": [] 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "subsystem": "accel", 00:35:18.175 "config": [ 00:35:18.175 { 00:35:18.175 "method": "accel_set_options", 00:35:18.175 "params": { 00:35:18.175 "small_cache_size": 128, 00:35:18.175 "large_cache_size": 16, 00:35:18.175 "task_count": 2048, 00:35:18.175 "sequence_count": 2048, 00:35:18.175 "buf_count": 2048 00:35:18.175 } 00:35:18.175 } 00:35:18.175 ] 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "subsystem": "bdev", 00:35:18.175 "config": [ 00:35:18.175 { 00:35:18.175 "method": "bdev_set_options", 00:35:18.175 "params": { 00:35:18.175 "bdev_io_pool_size": 65535, 00:35:18.175 "bdev_io_cache_size": 256, 00:35:18.175 "bdev_auto_examine": true, 00:35:18.175 "iobuf_small_cache_size": 128, 00:35:18.175 "iobuf_large_cache_size": 16 00:35:18.175 } 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "method": "bdev_raid_set_options", 00:35:18.175 "params": { 00:35:18.175 "process_window_size_kb": 1024 00:35:18.175 } 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "method": "bdev_iscsi_set_options", 00:35:18.175 "params": { 00:35:18.175 "timeout_sec": 30 00:35:18.175 } 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "method": "bdev_nvme_set_options", 00:35:18.175 "params": { 00:35:18.175 "action_on_timeout": "none", 00:35:18.175 "timeout_us": 0, 00:35:18.175 "timeout_admin_us": 0, 00:35:18.175 "keep_alive_timeout_ms": 10000, 00:35:18.175 "arbitration_burst": 0, 
00:35:18.175 "low_priority_weight": 0, 00:35:18.175 "medium_priority_weight": 0, 00:35:18.175 "high_priority_weight": 0, 00:35:18.175 "nvme_adminq_poll_period_us": 10000, 00:35:18.175 "nvme_ioq_poll_period_us": 0, 00:35:18.175 "io_queue_requests": 512, 00:35:18.175 "delay_cmd_submit": true, 00:35:18.175 "transport_retry_count": 4, 00:35:18.175 "bdev_retry_count": 3, 00:35:18.175 "transport_ack_timeout": 0, 00:35:18.175 "ctrlr_loss_timeout_sec": 0, 00:35:18.175 "reconnect_delay_sec": 0, 00:35:18.175 "fast_io_fail_timeout_sec": 0, 00:35:18.175 "disable_auto_failback": false, 00:35:18.175 "generate_uuids": false, 00:35:18.175 "transport_tos": 0, 00:35:18.175 "nvme_error_stat": false, 00:35:18.175 "rdma_srq_size": 0, 00:35:18.175 "io_path_stat": false, 00:35:18.175 "allow_accel_sequence": false, 00:35:18.175 "rdma_max_cq_size": 0, 00:35:18.175 "rdma_cm_event_timeout_ms": 0, 00:35:18.175 "dhchap_digests": [ 00:35:18.175 "sha256", 00:35:18.175 "sha384", 00:35:18.175 "sha512" 00:35:18.175 ], 00:35:18.175 "dhchap_dhgroups": [ 00:35:18.175 "null", 00:35:18.175 "ffdhe2048", 00:35:18.175 "ffdhe3072", 00:35:18.175 "ffdhe4096", 00:35:18.175 "ffdhe6144", 00:35:18.175 "ffdhe8192" 00:35:18.175 ] 00:35:18.175 } 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "method": "bdev_nvme_attach_controller", 00:35:18.175 "params": { 00:35:18.175 "name": "TLSTEST", 00:35:18.175 "trtype": "TCP", 00:35:18.175 "adrfam": "IPv4", 00:35:18.175 "traddr": "10.0.0.2", 00:35:18.175 "trsvcid": "4420", 00:35:18.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.175 "prchk_reftag": false, 00:35:18.175 "prchk_guard": false, 00:35:18.175 "ctrlr_loss_timeout_sec": 0, 00:35:18.175 "reconnect_delay_sec": 0, 00:35:18.175 "fast_io_fail_timeout_sec": 0, 00:35:18.175 "psk": "/tmp/tmp.eSenNS7DP3", 00:35:18.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:18.175 "hdgst": false, 00:35:18.175 "ddgst": false 00:35:18.175 } 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "method": "bdev_nvme_set_hotplug", 00:35:18.175 "params": { 00:35:18.175 "period_us": 100000, 00:35:18.175 "enable": false 00:35:18.175 } 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "method": "bdev_wait_for_examine" 00:35:18.175 } 00:35:18.175 ] 00:35:18.175 }, 00:35:18.175 { 00:35:18.175 "subsystem": "nbd", 00:35:18.175 "config": [] 00:35:18.175 } 00:35:18.175 ] 00:35:18.175 }' 00:35:18.175 16:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2840650 00:35:18.175 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2840650 ']' 00:35:18.175 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2840650 00:35:18.175 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:18.175 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:18.175 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2840650 00:35:18.175 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:35:18.176 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:35:18.176 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2840650' 00:35:18.176 killing process with pid 2840650 00:35:18.176 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2840650 00:35:18.176 Received shutdown signal, test time was about 10.000000 seconds 00:35:18.176 00:35:18.176 Latency(us) 00:35:18.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:35:18.176 =================================================================================================================== 00:35:18.176 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:18.176 [2024-07-22 16:48:37.721393] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:18.176 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2840650 00:35:18.433 16:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2840372 00:35:18.433 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2840372 ']' 00:35:18.433 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2840372 00:35:18.433 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:18.433 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:18.433 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2840372 00:35:18.433 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:18.433 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:18.433 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2840372' 00:35:18.433 killing process with pid 2840372 00:35:18.433 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2840372 00:35:18.433 [2024-07-22 16:48:37.970199] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:18.433 16:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2840372 00:35:18.692 16:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:35:18.692 16:48:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:18.692 16:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:18.692 16:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:35:18.692 "subsystems": [ 00:35:18.692 { 00:35:18.692 "subsystem": "keyring", 00:35:18.692 "config": [] 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "subsystem": "iobuf", 00:35:18.692 "config": [ 00:35:18.692 { 00:35:18.692 "method": "iobuf_set_options", 00:35:18.692 "params": { 00:35:18.692 "small_pool_count": 8192, 00:35:18.692 "large_pool_count": 1024, 00:35:18.692 "small_bufsize": 8192, 00:35:18.692 "large_bufsize": 135168 00:35:18.692 } 00:35:18.692 } 00:35:18.692 ] 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "subsystem": "sock", 00:35:18.692 "config": [ 00:35:18.692 { 00:35:18.692 "method": "sock_set_default_impl", 00:35:18.692 "params": { 00:35:18.692 "impl_name": "posix" 00:35:18.692 } 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "method": "sock_impl_set_options", 00:35:18.692 "params": { 00:35:18.692 "impl_name": "ssl", 00:35:18.692 "recv_buf_size": 4096, 00:35:18.692 "send_buf_size": 4096, 00:35:18.692 "enable_recv_pipe": true, 00:35:18.692 "enable_quickack": false, 00:35:18.692 "enable_placement_id": 0, 00:35:18.692 "enable_zerocopy_send_server": true, 00:35:18.692 "enable_zerocopy_send_client": false, 00:35:18.692 "zerocopy_threshold": 0, 00:35:18.692 "tls_version": 0, 00:35:18.692 "enable_ktls": false 00:35:18.692 } 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "method": "sock_impl_set_options", 00:35:18.692 "params": { 00:35:18.692 "impl_name": "posix", 00:35:18.692 
"recv_buf_size": 2097152, 00:35:18.692 "send_buf_size": 2097152, 00:35:18.692 "enable_recv_pipe": true, 00:35:18.692 "enable_quickack": false, 00:35:18.692 "enable_placement_id": 0, 00:35:18.692 "enable_zerocopy_send_server": true, 00:35:18.692 "enable_zerocopy_send_client": false, 00:35:18.692 "zerocopy_threshold": 0, 00:35:18.692 "tls_version": 0, 00:35:18.692 "enable_ktls": false 00:35:18.692 } 00:35:18.692 } 00:35:18.692 ] 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "subsystem": "vmd", 00:35:18.692 "config": [] 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "subsystem": "accel", 00:35:18.692 "config": [ 00:35:18.692 { 00:35:18.692 "method": "accel_set_options", 00:35:18.692 "params": { 00:35:18.692 "small_cache_size": 128, 00:35:18.692 "large_cache_size": 16, 00:35:18.692 "task_count": 2048, 00:35:18.692 "sequence_count": 2048, 00:35:18.692 "buf_count": 2048 00:35:18.692 } 00:35:18.692 } 00:35:18.692 ] 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "subsystem": "bdev", 00:35:18.692 "config": [ 00:35:18.692 { 00:35:18.692 "method": "bdev_set_options", 00:35:18.692 "params": { 00:35:18.692 "bdev_io_pool_size": 65535, 00:35:18.692 "bdev_io_cache_size": 256, 00:35:18.692 "bdev_auto_examine": true, 00:35:18.692 "iobuf_small_cache_size": 128, 00:35:18.692 "iobuf_large_cache_size": 16 00:35:18.692 } 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "method": "bdev_raid_set_options", 00:35:18.692 "params": { 00:35:18.692 "process_window_size_kb": 1024 00:35:18.692 } 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "method": "bdev_iscsi_set_options", 00:35:18.692 "params": { 00:35:18.692 "timeout_sec": 30 00:35:18.692 } 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "method": "bdev_nvme_set_options", 00:35:18.692 "params": { 00:35:18.692 "action_on_timeout": "none", 00:35:18.692 "timeout_us": 0, 00:35:18.692 "timeout_admin_us": 0, 00:35:18.692 "keep_alive_timeout_ms": 10000, 00:35:18.692 "arbitration_burst": 0, 00:35:18.692 "low_priority_weight": 0, 00:35:18.692 "medium_priority_weight": 0, 00:35:18.692 "high_priority_weight": 0, 00:35:18.692 "nvme_adminq_poll_period_us": 10000, 00:35:18.692 "nvme_ioq_poll_period_us": 0, 00:35:18.692 "io_queue_requests": 0, 00:35:18.692 "delay_cmd_submit": true, 00:35:18.692 "transport_retry_count": 4, 00:35:18.692 "bdev_retry_count": 3, 00:35:18.692 "transport_ack_timeout": 0, 00:35:18.692 "ctrlr_loss_timeout_sec": 0, 00:35:18.692 "reconnect_delay_sec": 0, 00:35:18.692 "fast_io_fail_timeout_sec": 0, 00:35:18.692 "disable_auto_failback": false, 00:35:18.692 "generate_uuids": false, 00:35:18.692 "transport_tos": 0, 00:35:18.692 "nvme_error_stat": false, 00:35:18.692 "rdma_srq_size": 0, 00:35:18.692 "io_path_stat": false, 00:35:18.692 "allow_accel_sequence": false, 00:35:18.692 "rdma_max_cq_size": 0, 00:35:18.692 "rdma_cm_event_timeout_ms": 0, 00:35:18.692 "dhchap_digests": [ 00:35:18.692 "sha256", 00:35:18.692 "sha384", 00:35:18.692 "sha512" 00:35:18.692 ], 00:35:18.692 "dhchap_dhgroups": [ 00:35:18.692 "null", 00:35:18.692 "ffdhe2048", 00:35:18.692 "ffdhe3072", 00:35:18.692 "ffdhe4096", 00:35:18.692 "ffdhe6144", 00:35:18.692 "ffdhe8192" 00:35:18.692 ] 00:35:18.692 } 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "method": "bdev_nvme_set_hotplug", 00:35:18.692 "params": { 00:35:18.692 "period_us": 100000, 00:35:18.692 "enable": false 00:35:18.692 } 00:35:18.692 }, 00:35:18.692 { 00:35:18.692 "method": "bdev_malloc_create", 00:35:18.692 "params": { 00:35:18.692 "name": "malloc0", 00:35:18.692 "num_blocks": 8192, 00:35:18.692 "block_size": 4096, 00:35:18.692 "physical_block_size": 4096, 
00:35:18.692 "uuid": "0d105bb6-8a20-4ba9-93ef-80047662ae19", 00:35:18.692 "optimal_io_boundary": 0 00:35:18.692 } 00:35:18.692 }, 00:35:18.692 { 00:35:18.693 "method": "bdev_wait_for_examine" 00:35:18.693 } 00:35:18.693 ] 00:35:18.693 }, 00:35:18.693 { 00:35:18.693 "subsystem": "nbd", 00:35:18.693 "config": [] 00:35:18.693 }, 00:35:18.693 { 00:35:18.693 "subsystem": "scheduler", 00:35:18.693 "config": [ 00:35:18.693 { 00:35:18.693 "method": "framework_set_scheduler", 00:35:18.693 "params": { 00:35:18.693 "name": "static" 00:35:18.693 } 00:35:18.693 } 00:35:18.693 ] 00:35:18.693 }, 00:35:18.693 { 00:35:18.693 "subsystem": "nvmf", 00:35:18.693 "config": [ 00:35:18.693 { 00:35:18.693 "method": "nvmf_set_config", 00:35:18.693 "params": { 00:35:18.693 "discovery_filter": "match_any", 00:35:18.693 "admin_cmd_passthru": { 00:35:18.693 "identify_ctrlr": false 00:35:18.693 } 00:35:18.693 } 00:35:18.693 }, 00:35:18.693 { 00:35:18.693 "method": "nvmf_set_max_subsystems", 00:35:18.693 "params": { 00:35:18.693 "max_subsystems": 1024 00:35:18.693 } 00:35:18.693 }, 00:35:18.693 { 00:35:18.693 "method": "nvmf_set_crdt", 00:35:18.693 "params": { 00:35:18.693 "crdt1": 0, 00:35:18.693 "crdt2": 0, 00:35:18.693 "crdt3": 0 00:35:18.693 } 00:35:18.693 }, 00:35:18.693 { 00:35:18.693 "method": "nvmf_create_transport", 00:35:18.693 "params": { 00:35:18.693 "trtype": "TCP", 00:35:18.693 "max_queue_depth": 128, 00:35:18.693 "max_io_qpairs_per_ctrlr": 127, 00:35:18.693 "in_capsule_data_size": 4096, 00:35:18.693 "max_io_size": 131072, 00:35:18.693 "io_unit_size": 131072, 00:35:18.693 "max_aq_depth": 128, 00:35:18.693 "num_shared_buffers": 511, 00:35:18.693 "buf_cache_size": 4294967295, 00:35:18.693 "dif_insert_or_strip": false, 00:35:18.693 "zcopy": false, 00:35:18.693 "c2h_success": false, 00:35:18.693 "sock_priority": 0, 00:35:18.693 "abort_timeout_sec": 1, 00:35:18.693 "ack_timeout": 0, 00:35:18.693 "data_wr_pool_size": 0 00:35:18.693 } 00:35:18.693 }, 00:35:18.693 { 00:35:18.693 "method": "nvmf_create_subsystem", 00:35:18.693 "params": { 00:35:18.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.693 "allow_any_host": false, 00:35:18.693 "serial_number": "SPDK00000000000001", 00:35:18.693 "model_number": "SPDK bdev Controller", 00:35:18.693 "max_namespaces": 10, 00:35:18.693 "min_cntlid": 1, 00:35:18.693 "max_cntlid": 65519, 00:35:18.693 "ana_reporting": false 00:35:18.693 } 00:35:18.693 }, 00:35:18.693 { 00:35:18.693 "method": "nvmf_subsystem_add_host", 00:35:18.693 "params": { 00:35:18.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.693 "host": "nqn.2016-06.io.spdk:host1", 00:35:18.693 "psk": "/tmp/tmp.eSenNS7DP3" 00:35:18.693 } 00:35:18.693 }, 00:35:18.693 { 00:35:18.693 "method": "nvmf_subsystem_add_ns", 00:35:18.693 "params": { 00:35:18.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.693 "namespace": { 00:35:18.693 "nsid": 1, 00:35:18.693 "bdev_name": "malloc0", 00:35:18.693 "nguid": "0D105BB68A204BA993EF80047662AE19", 00:35:18.693 "uuid": "0d105bb6-8a20-4ba9-93ef-80047662ae19", 00:35:18.693 "no_auto_visible": false 00:35:18.693 } 00:35:18.693 } 00:35:18.693 }, 00:35:18.693 { 00:35:18.693 "method": "nvmf_subsystem_add_listener", 00:35:18.693 "params": { 00:35:18.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.693 "listen_address": { 00:35:18.693 "trtype": "TCP", 00:35:18.693 "adrfam": "IPv4", 00:35:18.693 "traddr": "10.0.0.2", 00:35:18.693 "trsvcid": "4420" 00:35:18.693 }, 00:35:18.693 "secure_channel": true 00:35:18.693 } 00:35:18.693 } 00:35:18.693 ] 00:35:18.693 } 00:35:18.693 ] 00:35:18.693 }' 
00:35:18.693 16:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:18.693 16:48:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2840812 00:35:18.693 16:48:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:35:18.693 16:48:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2840812 00:35:18.693 16:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2840812 ']' 00:35:18.693 16:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.693 16:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:18.693 16:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.693 16:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:18.693 16:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:18.693 [2024-07-22 16:48:38.278680] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:18.693 [2024-07-22 16:48:38.278782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:18.693 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.952 [2024-07-22 16:48:38.356853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.952 [2024-07-22 16:48:38.443869] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.952 [2024-07-22 16:48:38.443930] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:18.952 [2024-07-22 16:48:38.443957] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.952 [2024-07-22 16:48:38.443979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:18.952 [2024-07-22 16:48:38.443992] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:18.952 [2024-07-22 16:48:38.444083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.210 [2024-07-22 16:48:38.679960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.210 [2024-07-22 16:48:38.695907] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:19.210 [2024-07-22 16:48:38.711962] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:19.210 [2024-07-22 16:48:38.723178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2840966 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2840966 /var/tmp/bdevperf.sock 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2840966 ']' 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:19.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
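Unlike the first pass, this bdevperf instance gets its NVMe controller from the JSON echoed just below onto /dev/fd/63 — its bdev section carries the bdev_nvme_attach_controller entry, PSK path included — so once the socket is up the only remaining step is to kick off the workload (a sketch, reusing the socket from the log):

    # The attach already happened via the replayed config; just drive the I/O.
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests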
00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:19.775 16:48:39 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:35:19.775 "subsystems": [ 00:35:19.775 { 00:35:19.775 "subsystem": "keyring", 00:35:19.775 "config": [] 00:35:19.775 }, 00:35:19.775 { 00:35:19.775 "subsystem": "iobuf", 00:35:19.775 "config": [ 00:35:19.775 { 00:35:19.775 "method": "iobuf_set_options", 00:35:19.775 "params": { 00:35:19.775 "small_pool_count": 8192, 00:35:19.775 "large_pool_count": 1024, 00:35:19.775 "small_bufsize": 8192, 00:35:19.775 "large_bufsize": 135168 00:35:19.775 } 00:35:19.775 } 00:35:19.775 ] 00:35:19.775 }, 00:35:19.775 { 00:35:19.775 "subsystem": "sock", 00:35:19.775 "config": [ 00:35:19.775 { 00:35:19.775 "method": "sock_set_default_impl", 00:35:19.775 "params": { 00:35:19.775 "impl_name": "posix" 00:35:19.775 } 00:35:19.775 }, 00:35:19.775 { 00:35:19.775 "method": "sock_impl_set_options", 00:35:19.775 "params": { 00:35:19.775 "impl_name": "ssl", 00:35:19.775 "recv_buf_size": 4096, 00:35:19.775 "send_buf_size": 4096, 00:35:19.775 "enable_recv_pipe": true, 00:35:19.775 "enable_quickack": false, 00:35:19.775 "enable_placement_id": 0, 00:35:19.775 "enable_zerocopy_send_server": true, 00:35:19.775 "enable_zerocopy_send_client": false, 00:35:19.775 "zerocopy_threshold": 0, 00:35:19.775 "tls_version": 0, 00:35:19.775 "enable_ktls": false 00:35:19.775 } 00:35:19.775 }, 00:35:19.775 { 00:35:19.775 "method": "sock_impl_set_options", 00:35:19.775 "params": { 00:35:19.775 "impl_name": "posix", 00:35:19.775 "recv_buf_size": 2097152, 00:35:19.775 "send_buf_size": 2097152, 00:35:19.775 "enable_recv_pipe": true, 00:35:19.775 "enable_quickack": false, 00:35:19.775 "enable_placement_id": 0, 00:35:19.775 "enable_zerocopy_send_server": true, 00:35:19.775 "enable_zerocopy_send_client": false, 00:35:19.775 "zerocopy_threshold": 0, 00:35:19.775 "tls_version": 0, 00:35:19.775 "enable_ktls": false 00:35:19.775 } 00:35:19.775 } 00:35:19.775 ] 00:35:19.775 }, 00:35:19.775 { 00:35:19.775 "subsystem": "vmd", 00:35:19.775 "config": [] 00:35:19.775 }, 00:35:19.775 { 00:35:19.775 "subsystem": "accel", 00:35:19.775 "config": [ 00:35:19.775 { 00:35:19.775 "method": "accel_set_options", 00:35:19.775 "params": { 00:35:19.775 "small_cache_size": 128, 00:35:19.775 "large_cache_size": 16, 00:35:19.775 "task_count": 2048, 00:35:19.775 "sequence_count": 2048, 00:35:19.775 "buf_count": 2048 00:35:19.775 } 00:35:19.775 } 00:35:19.775 ] 00:35:19.775 }, 00:35:19.775 { 00:35:19.775 "subsystem": "bdev", 00:35:19.775 "config": [ 00:35:19.775 { 00:35:19.775 "method": "bdev_set_options", 00:35:19.775 "params": { 00:35:19.775 "bdev_io_pool_size": 65535, 00:35:19.775 "bdev_io_cache_size": 256, 00:35:19.775 "bdev_auto_examine": true, 00:35:19.775 "iobuf_small_cache_size": 128, 00:35:19.775 "iobuf_large_cache_size": 16 00:35:19.775 } 00:35:19.775 }, 00:35:19.775 { 00:35:19.775 "method": "bdev_raid_set_options", 00:35:19.775 "params": { 00:35:19.775 "process_window_size_kb": 1024 00:35:19.775 } 00:35:19.775 }, 00:35:19.775 { 00:35:19.775 "method": "bdev_iscsi_set_options", 00:35:19.775 "params": { 00:35:19.775 "timeout_sec": 30 00:35:19.775 } 00:35:19.775 }, 00:35:19.775 { 00:35:19.775 "method": "bdev_nvme_set_options", 00:35:19.775 "params": { 00:35:19.775 "action_on_timeout": "none", 00:35:19.775 "timeout_us": 0, 00:35:19.775 "timeout_admin_us": 0, 00:35:19.775 "keep_alive_timeout_ms": 10000, 00:35:19.775 "arbitration_burst": 0, 00:35:19.775 "low_priority_weight": 0, 00:35:19.775 
"medium_priority_weight": 0, 00:35:19.775 "high_priority_weight": 0, 00:35:19.775 "nvme_adminq_poll_period_us": 10000, 00:35:19.775 "nvme_ioq_poll_period_us": 0, 00:35:19.775 "io_queue_requests": 512, 00:35:19.775 "delay_cmd_submit": true, 00:35:19.775 "transport_retry_count": 4, 00:35:19.775 "bdev_retry_count": 3, 00:35:19.775 "transport_ack_timeout": 0, 00:35:19.775 "ctrlr_loss_timeout_sec": 0, 00:35:19.775 "reconnect_delay_sec": 0, 00:35:19.775 "fast_io_fail_timeout_sec": 0, 00:35:19.775 "disable_auto_failback": false, 00:35:19.775 "generate_uuids": false, 00:35:19.776 "transport_tos": 0, 00:35:19.776 "nvme_error_stat": false, 00:35:19.776 "rdma_srq_size": 0, 00:35:19.776 "io_path_stat": false, 00:35:19.776 "allow_accel_sequence": false, 00:35:19.776 "rdma_max_cq_size": 0, 00:35:19.776 "rdma_cm_event_timeout_ms": 0, 00:35:19.776 "dhchap_digests": [ 00:35:19.776 "sha256", 00:35:19.776 "sha384", 00:35:19.776 "sha512" 00:35:19.776 ], 00:35:19.776 "dhchap_dhgroups": [ 00:35:19.776 "null", 00:35:19.776 "ffdhe2048", 00:35:19.776 "ffdhe3072", 00:35:19.776 "ffdhe4096", 00:35:19.776 "ffdhe6144", 00:35:19.776 "ffdhe8192" 00:35:19.776 ] 00:35:19.776 } 00:35:19.776 }, 00:35:19.776 { 00:35:19.776 "method": "bdev_nvme_attach_controller", 00:35:19.776 "params": { 00:35:19.776 "name": "TLSTEST", 00:35:19.776 "trtype": "TCP", 00:35:19.776 "adrfam": "IPv4", 00:35:19.776 "traddr": "10.0.0.2", 00:35:19.776 "trsvcid": "4420", 00:35:19.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:19.776 "prchk_reftag": false, 00:35:19.776 "prchk_guard": false, 00:35:19.776 "ctrlr_loss_timeout_sec": 0, 00:35:19.776 "reconnect_delay_sec": 0, 00:35:19.776 "fast_io_fail_timeout_sec": 0, 00:35:19.776 "psk": "/tmp/tmp.eSenNS7DP3", 00:35:19.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:19.776 "hdgst": false, 00:35:19.776 "ddgst": false 00:35:19.776 } 00:35:19.776 }, 00:35:19.776 { 00:35:19.776 "method": "bdev_nvme_set_hotplug", 00:35:19.776 "params": { 00:35:19.776 "period_us": 100000, 00:35:19.776 "enable": false 00:35:19.776 } 00:35:19.776 }, 00:35:19.776 { 00:35:19.776 "method": "bdev_wait_for_examine" 00:35:19.776 } 00:35:19.776 ] 00:35:19.776 }, 00:35:19.776 { 00:35:19.776 "subsystem": "nbd", 00:35:19.776 "config": [] 00:35:19.776 } 00:35:19.776 ] 00:35:19.776 }' 00:35:19.776 16:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:19.776 [2024-07-22 16:48:39.258915] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:35:19.776 [2024-07-22 16:48:39.259039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840966 ] 00:35:19.776 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.776 [2024-07-22 16:48:39.325996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.776 [2024-07-22 16:48:39.411984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:20.033 [2024-07-22 16:48:39.572621] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:20.033 [2024-07-22 16:48:39.572758] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:20.596 16:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:20.596 16:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:20.596 16:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:35:20.854 Running I/O for 10 seconds... 00:35:30.818 00:35:30.818 Latency(us) 00:35:30.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.818 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:30.818 Verification LBA range: start 0x0 length 0x2000 00:35:30.818 TLSTESTn1 : 10.02 3637.97 14.21 0.00 0.00 35120.87 5946.79 89323.14 00:35:30.818 =================================================================================================================== 00:35:30.818 Total : 3637.97 14.21 0.00 0.00 35120.87 5946.79 89323.14 00:35:30.818 0 00:35:30.818 16:48:50 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:30.818 16:48:50 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2840966 00:35:30.819 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2840966 ']' 00:35:30.819 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2840966 00:35:30.819 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:30.819 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:30.819 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2840966 00:35:30.819 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:35:30.819 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:35:30.819 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2840966' 00:35:30.819 killing process with pid 2840966 00:35:30.819 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2840966 00:35:30.819 Received shutdown signal, test time was about 10.000000 seconds 00:35:30.819 00:35:30.819 Latency(us) 00:35:30.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.819 =================================================================================================================== 00:35:30.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:30.819 [2024-07-22 16:48:50.420793] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:35:30.819 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2840966 00:35:31.076 16:48:50 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2840812 00:35:31.076 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2840812 ']' 00:35:31.076 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2840812 00:35:31.076 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:31.076 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:31.076 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2840812 00:35:31.077 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:31.077 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:31.077 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2840812' 00:35:31.077 killing process with pid 2840812 00:35:31.077 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2840812 00:35:31.077 [2024-07-22 16:48:50.673168] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:31.077 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2840812 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2842294 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2842294 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2842294 ']' 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:31.335 16:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:31.335 [2024-07-22 16:48:50.977530] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:31.335 [2024-07-22 16:48:50.977645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.593 EAL: No free 2048 kB hugepages reported on node 1 00:35:31.594 [2024-07-22 16:48:51.056811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.594 [2024-07-22 16:48:51.144374] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:31.594 [2024-07-22 16:48:51.144438] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:31.594 [2024-07-22 16:48:51.144465] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:31.594 [2024-07-22 16:48:51.144480] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:31.594 [2024-07-22 16:48:51.144492] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:31.594 [2024-07-22 16:48:51.144530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.851 16:48:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:31.851 16:48:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:31.851 16:48:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:31.851 16:48:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.851 16:48:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:31.851 16:48:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:31.851 16:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.eSenNS7DP3 00:35:31.851 16:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eSenNS7DP3 00:35:31.851 16:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:32.109 [2024-07-22 16:48:51.514123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.109 16:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:32.367 16:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:32.367 [2024-07-22 16:48:51.999468] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:32.367 [2024-07-22 16:48:51.999698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.625 16:48:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:32.625 malloc0 00:35:32.625 16:48:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:32.883 16:48:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eSenNS7DP3 00:35:33.141 [2024-07-22 16:48:52.740482] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:33.141 16:48:52 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2842571 00:35:33.141 16:48:52 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:35:33.141 16:48:52 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:35:33.141 16:48:52 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2842571 /var/tmp/bdevperf.sock 00:35:33.141 16:48:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2842571 ']' 00:35:33.141 16:48:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:33.141 16:48:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:33.141 16:48:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:33.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:33.141 16:48:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:33.141 16:48:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:33.399 [2024-07-22 16:48:52.804776] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:33.399 [2024-07-22 16:48:52.804859] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842571 ] 00:35:33.399 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.399 [2024-07-22 16:48:52.876570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.399 [2024-07-22 16:48:52.969190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.656 16:48:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:33.656 16:48:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:33.656 16:48:53 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eSenNS7DP3 00:35:33.914 16:48:53 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:35:34.172 [2024-07-22 16:48:53.572582] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:34.172 nvme0n1 00:35:34.172 16:48:53 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:34.172 Running I/O for 1 seconds... 
00:35:35.545 00:35:35.545 Latency(us) 00:35:35.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.545 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:35.545 Verification LBA range: start 0x0 length 0x2000 00:35:35.545 nvme0n1 : 1.02 3226.98 12.61 0.00 0.00 39206.21 5898.24 48933.55 00:35:35.545 =================================================================================================================== 00:35:35.545 Total : 3226.98 12.61 0.00 0.00 39206.21 5898.24 48933.55 00:35:35.545 0 00:35:35.545 16:48:54 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2842571 00:35:35.545 16:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2842571 ']' 00:35:35.545 16:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2842571 00:35:35.545 16:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:35.545 16:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:35.545 16:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2842571 00:35:35.545 16:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:35.545 16:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:35.545 16:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2842571' 00:35:35.545 killing process with pid 2842571 00:35:35.545 16:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2842571 00:35:35.545 Received shutdown signal, test time was about 1.000000 seconds 00:35:35.545 00:35:35.545 Latency(us) 00:35:35.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.545 =================================================================================================================== 00:35:35.545 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:35.545 16:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2842571 00:35:35.545 16:48:55 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2842294 00:35:35.545 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2842294 ']' 00:35:35.545 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2842294 00:35:35.545 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:35.545 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:35.545 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2842294 00:35:35.545 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:35.545 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:35.545 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2842294' 00:35:35.545 killing process with pid 2842294 00:35:35.545 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2842294 00:35:35.545 [2024-07-22 16:48:55.079704] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:35.545 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2842294 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:35.804 
16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2842856 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2842856 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2842856 ']' 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:35.804 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:35.804 [2024-07-22 16:48:55.353487] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:35.804 [2024-07-22 16:48:55.353582] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.804 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.804 [2024-07-22 16:48:55.430783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.062 [2024-07-22 16:48:55.515070] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:36.062 [2024-07-22 16:48:55.515124] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.062 [2024-07-22 16:48:55.515146] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.062 [2024-07-22 16:48:55.515157] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.062 [2024-07-22 16:48:55.515167] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
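The app_setup_trace notices above appear because nvmf_tgt was launched with -e 0xFFFF, enabling every tracepoint group. If a failing run needs to be inspected, the two routes the notices describe are (commands taken from the notices themselves; the copy destination is illustrative):

    spdk_trace -s nvmf -i 0            # snapshot nvmf target events at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/     # keep the trace file for offline analysis/debug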
00:35:36.062 [2024-07-22 16:48:55.515193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:36.062 [2024-07-22 16:48:55.643754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.062 malloc0 00:35:36.062 [2024-07-22 16:48:55.675781] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:36.062 [2024-07-22 16:48:55.676066] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2842966 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2842966 /var/tmp/bdevperf.sock 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2842966 ']' 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:36.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:36.062 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:36.063 16:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:36.321 [2024-07-22 16:48:55.746932] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
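With the listener now up on 10.0.0.2:4420 (and tcp.c again flagging TLS support as experimental), the target side of this run is complete. The same bring-up that setup_nvmf_tgt performed step by step earlier in the log (here applied as a single rpc_cmd batch, which additionally registered the PSK through the keyring as key0, as the saved config further below shows) reduces to six RPCs; rpc.py paths are shortened for readability:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eSenNS7DP3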
00:35:36.321 [2024-07-22 16:48:55.747036] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842966 ] 00:35:36.321 EAL: No free 2048 kB hugepages reported on node 1 00:35:36.321 [2024-07-22 16:48:55.817544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.321 [2024-07-22 16:48:55.908032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.580 16:48:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:36.580 16:48:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:36.580 16:48:56 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eSenNS7DP3 00:35:36.838 16:48:56 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:35:36.838 [2024-07-22 16:48:56.474443] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:37.096 nvme0n1 00:35:37.096 16:48:56 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:37.096 Running I/O for 1 seconds... 00:35:38.469 00:35:38.469 Latency(us) 00:35:38.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.469 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:38.469 Verification LBA range: start 0x0 length 0x2000 00:35:38.469 nvme0n1 : 1.02 3124.56 12.21 0.00 0.00 40478.41 6602.15 38059.43 00:35:38.469 =================================================================================================================== 00:35:38.469 Total : 3124.56 12.21 0.00 0.00 40478.41 6602.15 38059.43 00:35:38.469 0 00:35:38.469 16:48:57 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:35:38.469 16:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.469 16:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:38.469 16:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.469 16:48:57 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:35:38.469 "subsystems": [ 00:35:38.469 { 00:35:38.469 "subsystem": "keyring", 00:35:38.469 "config": [ 00:35:38.469 { 00:35:38.469 "method": "keyring_file_add_key", 00:35:38.469 "params": { 00:35:38.469 "name": "key0", 00:35:38.469 "path": "/tmp/tmp.eSenNS7DP3" 00:35:38.469 } 00:35:38.469 } 00:35:38.469 ] 00:35:38.469 }, 00:35:38.469 { 00:35:38.469 "subsystem": "iobuf", 00:35:38.469 "config": [ 00:35:38.469 { 00:35:38.469 "method": "iobuf_set_options", 00:35:38.469 "params": { 00:35:38.469 "small_pool_count": 8192, 00:35:38.469 "large_pool_count": 1024, 00:35:38.469 "small_bufsize": 8192, 00:35:38.469 "large_bufsize": 135168 00:35:38.469 } 00:35:38.469 } 00:35:38.469 ] 00:35:38.469 }, 00:35:38.469 { 00:35:38.469 "subsystem": "sock", 00:35:38.469 "config": [ 00:35:38.469 { 00:35:38.469 "method": "sock_set_default_impl", 00:35:38.469 "params": { 00:35:38.469 "impl_name": "posix" 00:35:38.469 } 00:35:38.469 }, 00:35:38.469 
{ 00:35:38.469 "method": "sock_impl_set_options", 00:35:38.469 "params": { 00:35:38.469 "impl_name": "ssl", 00:35:38.469 "recv_buf_size": 4096, 00:35:38.469 "send_buf_size": 4096, 00:35:38.469 "enable_recv_pipe": true, 00:35:38.469 "enable_quickack": false, 00:35:38.469 "enable_placement_id": 0, 00:35:38.469 "enable_zerocopy_send_server": true, 00:35:38.469 "enable_zerocopy_send_client": false, 00:35:38.469 "zerocopy_threshold": 0, 00:35:38.469 "tls_version": 0, 00:35:38.469 "enable_ktls": false 00:35:38.469 } 00:35:38.469 }, 00:35:38.469 { 00:35:38.469 "method": "sock_impl_set_options", 00:35:38.469 "params": { 00:35:38.469 "impl_name": "posix", 00:35:38.469 "recv_buf_size": 2097152, 00:35:38.469 "send_buf_size": 2097152, 00:35:38.469 "enable_recv_pipe": true, 00:35:38.469 "enable_quickack": false, 00:35:38.469 "enable_placement_id": 0, 00:35:38.469 "enable_zerocopy_send_server": true, 00:35:38.469 "enable_zerocopy_send_client": false, 00:35:38.469 "zerocopy_threshold": 0, 00:35:38.469 "tls_version": 0, 00:35:38.469 "enable_ktls": false 00:35:38.469 } 00:35:38.469 } 00:35:38.469 ] 00:35:38.469 }, 00:35:38.469 { 00:35:38.469 "subsystem": "vmd", 00:35:38.469 "config": [] 00:35:38.469 }, 00:35:38.469 { 00:35:38.469 "subsystem": "accel", 00:35:38.469 "config": [ 00:35:38.469 { 00:35:38.469 "method": "accel_set_options", 00:35:38.469 "params": { 00:35:38.469 "small_cache_size": 128, 00:35:38.469 "large_cache_size": 16, 00:35:38.469 "task_count": 2048, 00:35:38.469 "sequence_count": 2048, 00:35:38.469 "buf_count": 2048 00:35:38.469 } 00:35:38.469 } 00:35:38.469 ] 00:35:38.469 }, 00:35:38.469 { 00:35:38.469 "subsystem": "bdev", 00:35:38.469 "config": [ 00:35:38.469 { 00:35:38.469 "method": "bdev_set_options", 00:35:38.469 "params": { 00:35:38.469 "bdev_io_pool_size": 65535, 00:35:38.469 "bdev_io_cache_size": 256, 00:35:38.469 "bdev_auto_examine": true, 00:35:38.469 "iobuf_small_cache_size": 128, 00:35:38.469 "iobuf_large_cache_size": 16 00:35:38.469 } 00:35:38.469 }, 00:35:38.469 { 00:35:38.469 "method": "bdev_raid_set_options", 00:35:38.469 "params": { 00:35:38.469 "process_window_size_kb": 1024 00:35:38.469 } 00:35:38.469 }, 00:35:38.469 { 00:35:38.469 "method": "bdev_iscsi_set_options", 00:35:38.469 "params": { 00:35:38.469 "timeout_sec": 30 00:35:38.469 } 00:35:38.469 }, 00:35:38.469 { 00:35:38.469 "method": "bdev_nvme_set_options", 00:35:38.469 "params": { 00:35:38.469 "action_on_timeout": "none", 00:35:38.469 "timeout_us": 0, 00:35:38.469 "timeout_admin_us": 0, 00:35:38.469 "keep_alive_timeout_ms": 10000, 00:35:38.469 "arbitration_burst": 0, 00:35:38.469 "low_priority_weight": 0, 00:35:38.469 "medium_priority_weight": 0, 00:35:38.469 "high_priority_weight": 0, 00:35:38.469 "nvme_adminq_poll_period_us": 10000, 00:35:38.469 "nvme_ioq_poll_period_us": 0, 00:35:38.469 "io_queue_requests": 0, 00:35:38.469 "delay_cmd_submit": true, 00:35:38.469 "transport_retry_count": 4, 00:35:38.469 "bdev_retry_count": 3, 00:35:38.469 "transport_ack_timeout": 0, 00:35:38.469 "ctrlr_loss_timeout_sec": 0, 00:35:38.469 "reconnect_delay_sec": 0, 00:35:38.469 "fast_io_fail_timeout_sec": 0, 00:35:38.469 "disable_auto_failback": false, 00:35:38.469 "generate_uuids": false, 00:35:38.469 "transport_tos": 0, 00:35:38.469 "nvme_error_stat": false, 00:35:38.469 "rdma_srq_size": 0, 00:35:38.469 "io_path_stat": false, 00:35:38.469 "allow_accel_sequence": false, 00:35:38.469 "rdma_max_cq_size": 0, 00:35:38.469 "rdma_cm_event_timeout_ms": 0, 00:35:38.469 "dhchap_digests": [ 00:35:38.469 "sha256", 00:35:38.469 "sha384", 
00:35:38.469 "sha512" 00:35:38.469 ], 00:35:38.469 "dhchap_dhgroups": [ 00:35:38.469 "null", 00:35:38.469 "ffdhe2048", 00:35:38.469 "ffdhe3072", 00:35:38.469 "ffdhe4096", 00:35:38.469 "ffdhe6144", 00:35:38.469 "ffdhe8192" 00:35:38.469 ] 00:35:38.469 } 00:35:38.469 }, 00:35:38.469 { 00:35:38.469 "method": "bdev_nvme_set_hotplug", 00:35:38.469 "params": { 00:35:38.469 "period_us": 100000, 00:35:38.469 "enable": false 00:35:38.469 } 00:35:38.469 }, 00:35:38.470 { 00:35:38.470 "method": "bdev_malloc_create", 00:35:38.470 "params": { 00:35:38.470 "name": "malloc0", 00:35:38.470 "num_blocks": 8192, 00:35:38.470 "block_size": 4096, 00:35:38.470 "physical_block_size": 4096, 00:35:38.470 "uuid": "a4a86a32-862d-497c-8047-0763479fe88a", 00:35:38.470 "optimal_io_boundary": 0 00:35:38.470 } 00:35:38.470 }, 00:35:38.470 { 00:35:38.470 "method": "bdev_wait_for_examine" 00:35:38.470 } 00:35:38.470 ] 00:35:38.470 }, 00:35:38.470 { 00:35:38.470 "subsystem": "nbd", 00:35:38.470 "config": [] 00:35:38.470 }, 00:35:38.470 { 00:35:38.470 "subsystem": "scheduler", 00:35:38.470 "config": [ 00:35:38.470 { 00:35:38.470 "method": "framework_set_scheduler", 00:35:38.470 "params": { 00:35:38.470 "name": "static" 00:35:38.470 } 00:35:38.470 } 00:35:38.470 ] 00:35:38.470 }, 00:35:38.470 { 00:35:38.470 "subsystem": "nvmf", 00:35:38.470 "config": [ 00:35:38.470 { 00:35:38.470 "method": "nvmf_set_config", 00:35:38.470 "params": { 00:35:38.470 "discovery_filter": "match_any", 00:35:38.470 "admin_cmd_passthru": { 00:35:38.470 "identify_ctrlr": false 00:35:38.470 } 00:35:38.470 } 00:35:38.470 }, 00:35:38.470 { 00:35:38.470 "method": "nvmf_set_max_subsystems", 00:35:38.470 "params": { 00:35:38.470 "max_subsystems": 1024 00:35:38.470 } 00:35:38.470 }, 00:35:38.470 { 00:35:38.470 "method": "nvmf_set_crdt", 00:35:38.470 "params": { 00:35:38.470 "crdt1": 0, 00:35:38.470 "crdt2": 0, 00:35:38.470 "crdt3": 0 00:35:38.470 } 00:35:38.470 }, 00:35:38.470 { 00:35:38.470 "method": "nvmf_create_transport", 00:35:38.470 "params": { 00:35:38.470 "trtype": "TCP", 00:35:38.470 "max_queue_depth": 128, 00:35:38.470 "max_io_qpairs_per_ctrlr": 127, 00:35:38.470 "in_capsule_data_size": 4096, 00:35:38.470 "max_io_size": 131072, 00:35:38.470 "io_unit_size": 131072, 00:35:38.470 "max_aq_depth": 128, 00:35:38.470 "num_shared_buffers": 511, 00:35:38.470 "buf_cache_size": 4294967295, 00:35:38.470 "dif_insert_or_strip": false, 00:35:38.470 "zcopy": false, 00:35:38.470 "c2h_success": false, 00:35:38.470 "sock_priority": 0, 00:35:38.470 "abort_timeout_sec": 1, 00:35:38.470 "ack_timeout": 0, 00:35:38.470 "data_wr_pool_size": 0 00:35:38.470 } 00:35:38.470 }, 00:35:38.470 { 00:35:38.470 "method": "nvmf_create_subsystem", 00:35:38.470 "params": { 00:35:38.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.470 "allow_any_host": false, 00:35:38.470 "serial_number": "00000000000000000000", 00:35:38.470 "model_number": "SPDK bdev Controller", 00:35:38.470 "max_namespaces": 32, 00:35:38.470 "min_cntlid": 1, 00:35:38.470 "max_cntlid": 65519, 00:35:38.470 "ana_reporting": false 00:35:38.470 } 00:35:38.470 }, 00:35:38.470 { 00:35:38.470 "method": "nvmf_subsystem_add_host", 00:35:38.470 "params": { 00:35:38.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.470 "host": "nqn.2016-06.io.spdk:host1", 00:35:38.470 "psk": "key0" 00:35:38.470 } 00:35:38.470 }, 00:35:38.470 { 00:35:38.470 "method": "nvmf_subsystem_add_ns", 00:35:38.470 "params": { 00:35:38.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.470 "namespace": { 00:35:38.470 "nsid": 1, 00:35:38.470 "bdev_name": 
"malloc0", 00:35:38.470 "nguid": "A4A86A32862D497C80470763479FE88A", 00:35:38.470 "uuid": "a4a86a32-862d-497c-8047-0763479fe88a", 00:35:38.470 "no_auto_visible": false 00:35:38.470 } 00:35:38.470 } 00:35:38.470 }, 00:35:38.470 { 00:35:38.470 "method": "nvmf_subsystem_add_listener", 00:35:38.470 "params": { 00:35:38.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.470 "listen_address": { 00:35:38.470 "trtype": "TCP", 00:35:38.470 "adrfam": "IPv4", 00:35:38.470 "traddr": "10.0.0.2", 00:35:38.470 "trsvcid": "4420" 00:35:38.470 }, 00:35:38.470 "secure_channel": true 00:35:38.470 } 00:35:38.470 } 00:35:38.470 ] 00:35:38.470 } 00:35:38.470 ] 00:35:38.470 }' 00:35:38.470 16:48:57 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:35:38.728 16:48:58 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:35:38.728 "subsystems": [ 00:35:38.728 { 00:35:38.728 "subsystem": "keyring", 00:35:38.728 "config": [ 00:35:38.728 { 00:35:38.728 "method": "keyring_file_add_key", 00:35:38.728 "params": { 00:35:38.728 "name": "key0", 00:35:38.728 "path": "/tmp/tmp.eSenNS7DP3" 00:35:38.728 } 00:35:38.728 } 00:35:38.728 ] 00:35:38.728 }, 00:35:38.728 { 00:35:38.728 "subsystem": "iobuf", 00:35:38.728 "config": [ 00:35:38.728 { 00:35:38.728 "method": "iobuf_set_options", 00:35:38.728 "params": { 00:35:38.729 "small_pool_count": 8192, 00:35:38.729 "large_pool_count": 1024, 00:35:38.729 "small_bufsize": 8192, 00:35:38.729 "large_bufsize": 135168 00:35:38.729 } 00:35:38.729 } 00:35:38.729 ] 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "subsystem": "sock", 00:35:38.729 "config": [ 00:35:38.729 { 00:35:38.729 "method": "sock_set_default_impl", 00:35:38.729 "params": { 00:35:38.729 "impl_name": "posix" 00:35:38.729 } 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "method": "sock_impl_set_options", 00:35:38.729 "params": { 00:35:38.729 "impl_name": "ssl", 00:35:38.729 "recv_buf_size": 4096, 00:35:38.729 "send_buf_size": 4096, 00:35:38.729 "enable_recv_pipe": true, 00:35:38.729 "enable_quickack": false, 00:35:38.729 "enable_placement_id": 0, 00:35:38.729 "enable_zerocopy_send_server": true, 00:35:38.729 "enable_zerocopy_send_client": false, 00:35:38.729 "zerocopy_threshold": 0, 00:35:38.729 "tls_version": 0, 00:35:38.729 "enable_ktls": false 00:35:38.729 } 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "method": "sock_impl_set_options", 00:35:38.729 "params": { 00:35:38.729 "impl_name": "posix", 00:35:38.729 "recv_buf_size": 2097152, 00:35:38.729 "send_buf_size": 2097152, 00:35:38.729 "enable_recv_pipe": true, 00:35:38.729 "enable_quickack": false, 00:35:38.729 "enable_placement_id": 0, 00:35:38.729 "enable_zerocopy_send_server": true, 00:35:38.729 "enable_zerocopy_send_client": false, 00:35:38.729 "zerocopy_threshold": 0, 00:35:38.729 "tls_version": 0, 00:35:38.729 "enable_ktls": false 00:35:38.729 } 00:35:38.729 } 00:35:38.729 ] 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "subsystem": "vmd", 00:35:38.729 "config": [] 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "subsystem": "accel", 00:35:38.729 "config": [ 00:35:38.729 { 00:35:38.729 "method": "accel_set_options", 00:35:38.729 "params": { 00:35:38.729 "small_cache_size": 128, 00:35:38.729 "large_cache_size": 16, 00:35:38.729 "task_count": 2048, 00:35:38.729 "sequence_count": 2048, 00:35:38.729 "buf_count": 2048 00:35:38.729 } 00:35:38.729 } 00:35:38.729 ] 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "subsystem": "bdev", 00:35:38.729 "config": [ 00:35:38.729 { 00:35:38.729 
"method": "bdev_set_options", 00:35:38.729 "params": { 00:35:38.729 "bdev_io_pool_size": 65535, 00:35:38.729 "bdev_io_cache_size": 256, 00:35:38.729 "bdev_auto_examine": true, 00:35:38.729 "iobuf_small_cache_size": 128, 00:35:38.729 "iobuf_large_cache_size": 16 00:35:38.729 } 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "method": "bdev_raid_set_options", 00:35:38.729 "params": { 00:35:38.729 "process_window_size_kb": 1024 00:35:38.729 } 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "method": "bdev_iscsi_set_options", 00:35:38.729 "params": { 00:35:38.729 "timeout_sec": 30 00:35:38.729 } 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "method": "bdev_nvme_set_options", 00:35:38.729 "params": { 00:35:38.729 "action_on_timeout": "none", 00:35:38.729 "timeout_us": 0, 00:35:38.729 "timeout_admin_us": 0, 00:35:38.729 "keep_alive_timeout_ms": 10000, 00:35:38.729 "arbitration_burst": 0, 00:35:38.729 "low_priority_weight": 0, 00:35:38.729 "medium_priority_weight": 0, 00:35:38.729 "high_priority_weight": 0, 00:35:38.729 "nvme_adminq_poll_period_us": 10000, 00:35:38.729 "nvme_ioq_poll_period_us": 0, 00:35:38.729 "io_queue_requests": 512, 00:35:38.729 "delay_cmd_submit": true, 00:35:38.729 "transport_retry_count": 4, 00:35:38.729 "bdev_retry_count": 3, 00:35:38.729 "transport_ack_timeout": 0, 00:35:38.729 "ctrlr_loss_timeout_sec": 0, 00:35:38.729 "reconnect_delay_sec": 0, 00:35:38.729 "fast_io_fail_timeout_sec": 0, 00:35:38.729 "disable_auto_failback": false, 00:35:38.729 "generate_uuids": false, 00:35:38.729 "transport_tos": 0, 00:35:38.729 "nvme_error_stat": false, 00:35:38.729 "rdma_srq_size": 0, 00:35:38.729 "io_path_stat": false, 00:35:38.729 "allow_accel_sequence": false, 00:35:38.729 "rdma_max_cq_size": 0, 00:35:38.729 "rdma_cm_event_timeout_ms": 0, 00:35:38.729 "dhchap_digests": [ 00:35:38.729 "sha256", 00:35:38.729 "sha384", 00:35:38.729 "sha512" 00:35:38.729 ], 00:35:38.729 "dhchap_dhgroups": [ 00:35:38.729 "null", 00:35:38.729 "ffdhe2048", 00:35:38.729 "ffdhe3072", 00:35:38.729 "ffdhe4096", 00:35:38.729 "ffdhe6144", 00:35:38.729 "ffdhe8192" 00:35:38.729 ] 00:35:38.729 } 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "method": "bdev_nvme_attach_controller", 00:35:38.729 "params": { 00:35:38.729 "name": "nvme0", 00:35:38.729 "trtype": "TCP", 00:35:38.729 "adrfam": "IPv4", 00:35:38.729 "traddr": "10.0.0.2", 00:35:38.729 "trsvcid": "4420", 00:35:38.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.729 "prchk_reftag": false, 00:35:38.729 "prchk_guard": false, 00:35:38.729 "ctrlr_loss_timeout_sec": 0, 00:35:38.729 "reconnect_delay_sec": 0, 00:35:38.729 "fast_io_fail_timeout_sec": 0, 00:35:38.729 "psk": "key0", 00:35:38.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:38.729 "hdgst": false, 00:35:38.729 "ddgst": false 00:35:38.729 } 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "method": "bdev_nvme_set_hotplug", 00:35:38.729 "params": { 00:35:38.729 "period_us": 100000, 00:35:38.729 "enable": false 00:35:38.729 } 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "method": "bdev_enable_histogram", 00:35:38.729 "params": { 00:35:38.729 "name": "nvme0n1", 00:35:38.729 "enable": true 00:35:38.729 } 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "method": "bdev_wait_for_examine" 00:35:38.729 } 00:35:38.729 ] 00:35:38.729 }, 00:35:38.729 { 00:35:38.729 "subsystem": "nbd", 00:35:38.729 "config": [] 00:35:38.729 } 00:35:38.729 ] 00:35:38.729 }' 00:35:38.729 16:48:58 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2842966 00:35:38.729 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2842966 
']' 00:35:38.729 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2842966 00:35:38.729 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:38.729 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:38.729 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2842966 00:35:38.729 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:38.729 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:38.729 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2842966' 00:35:38.729 killing process with pid 2842966 00:35:38.729 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2842966 00:35:38.729 Received shutdown signal, test time was about 1.000000 seconds 00:35:38.729 00:35:38.729 Latency(us) 00:35:38.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.729 =================================================================================================================== 00:35:38.729 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:38.729 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2842966 00:35:38.988 16:48:58 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2842856 00:35:38.988 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2842856 ']' 00:35:38.988 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2842856 00:35:38.988 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:38.988 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:38.988 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2842856 00:35:38.988 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:38.988 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:38.988 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2842856' 00:35:38.988 killing process with pid 2842856 00:35:38.988 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2842856 00:35:38.988 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2842856 00:35:39.246 16:48:58 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:35:39.246 16:48:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:39.246 16:48:58 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:35:39.246 "subsystems": [ 00:35:39.246 { 00:35:39.246 "subsystem": "keyring", 00:35:39.246 "config": [ 00:35:39.246 { 00:35:39.246 "method": "keyring_file_add_key", 00:35:39.246 "params": { 00:35:39.246 "name": "key0", 00:35:39.246 "path": "/tmp/tmp.eSenNS7DP3" 00:35:39.246 } 00:35:39.246 } 00:35:39.246 ] 00:35:39.246 }, 00:35:39.246 { 00:35:39.246 "subsystem": "iobuf", 00:35:39.246 "config": [ 00:35:39.246 { 00:35:39.246 "method": "iobuf_set_options", 00:35:39.246 "params": { 00:35:39.246 "small_pool_count": 8192, 00:35:39.246 "large_pool_count": 1024, 00:35:39.246 "small_bufsize": 8192, 00:35:39.246 "large_bufsize": 135168 00:35:39.246 } 00:35:39.246 } 00:35:39.246 ] 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "subsystem": "sock", 00:35:39.247 "config": [ 00:35:39.247 { 00:35:39.247 "method": "sock_set_default_impl", 
00:35:39.247 "params": { 00:35:39.247 "impl_name": "posix" 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "sock_impl_set_options", 00:35:39.247 "params": { 00:35:39.247 "impl_name": "ssl", 00:35:39.247 "recv_buf_size": 4096, 00:35:39.247 "send_buf_size": 4096, 00:35:39.247 "enable_recv_pipe": true, 00:35:39.247 "enable_quickack": false, 00:35:39.247 "enable_placement_id": 0, 00:35:39.247 "enable_zerocopy_send_server": true, 00:35:39.247 "enable_zerocopy_send_client": false, 00:35:39.247 "zerocopy_threshold": 0, 00:35:39.247 "tls_version": 0, 00:35:39.247 "enable_ktls": false 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "sock_impl_set_options", 00:35:39.247 "params": { 00:35:39.247 "impl_name": "posix", 00:35:39.247 "recv_buf_size": 2097152, 00:35:39.247 "send_buf_size": 2097152, 00:35:39.247 "enable_recv_pipe": true, 00:35:39.247 "enable_quickack": false, 00:35:39.247 "enable_placement_id": 0, 00:35:39.247 "enable_zerocopy_send_server": true, 00:35:39.247 "enable_zerocopy_send_client": false, 00:35:39.247 "zerocopy_threshold": 0, 00:35:39.247 "tls_version": 0, 00:35:39.247 "enable_ktls": false 00:35:39.247 } 00:35:39.247 } 00:35:39.247 ] 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "subsystem": "vmd", 00:35:39.247 "config": [] 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "subsystem": "accel", 00:35:39.247 "config": [ 00:35:39.247 { 00:35:39.247 "method": "accel_set_options", 00:35:39.247 "params": { 00:35:39.247 "small_cache_size": 128, 00:35:39.247 "large_cache_size": 16, 00:35:39.247 "task_count": 2048, 00:35:39.247 "sequence_count": 2048, 00:35:39.247 "buf_count": 2048 00:35:39.247 } 00:35:39.247 } 00:35:39.247 ] 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "subsystem": "bdev", 00:35:39.247 "config": [ 00:35:39.247 { 00:35:39.247 "method": "bdev_set_options", 00:35:39.247 "params": { 00:35:39.247 "bdev_io_pool_size": 65535, 00:35:39.247 "bdev_io_cache_size": 256, 00:35:39.247 "bdev_auto_examine": true, 00:35:39.247 "iobuf_small_cache_size": 128, 00:35:39.247 "iobuf_large_cache_size": 16 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "bdev_raid_set_options", 00:35:39.247 "params": { 00:35:39.247 "process_window_size_kb": 1024 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "bdev_iscsi_set_options", 00:35:39.247 "params": { 00:35:39.247 "timeout_sec": 30 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "bdev_nvme_set_options", 00:35:39.247 "params": { 00:35:39.247 "action_on_timeout": "none", 00:35:39.247 "timeout_us": 0, 00:35:39.247 "timeout_admin_us": 0, 00:35:39.247 "keep_alive_timeout_ms": 10000, 00:35:39.247 "arbitration_burst": 0, 00:35:39.247 "low_priority_weight": 0, 00:35:39.247 "medium_priority_weight": 0, 00:35:39.247 "high_priority_weight": 0, 00:35:39.247 "nvme_adminq_poll_period_us": 10000, 00:35:39.247 "nvme_ioq_poll_period_us": 0, 00:35:39.247 "io_queue_requests": 0, 00:35:39.247 "delay_cmd_submit": true, 00:35:39.247 "transport_retry_count": 4, 00:35:39.247 "bdev_retry_count": 3, 00:35:39.247 "transport_ack_timeout": 0, 00:35:39.247 "ctrlr_loss_timeout_sec": 0, 00:35:39.247 "reconnect_delay_sec": 0, 00:35:39.247 "fast_io_fail_timeout_sec": 0, 00:35:39.247 "disable_auto_failback": false, 00:35:39.247 "generate_uuids": false, 00:35:39.247 "transport_tos": 0, 00:35:39.247 "nvme_error_stat": false, 00:35:39.247 "rdma_srq_size": 0, 00:35:39.247 "io_path_stat": false, 00:35:39.247 "allow_accel_sequence": false, 00:35:39.247 "rdma_max_cq_size": 0, 00:35:39.247 
"rdma_cm_event_timeout_ms": 0, 00:35:39.247 "dhchap_digests": [ 00:35:39.247 "sha256", 00:35:39.247 "sha384", 00:35:39.247 "sha512" 00:35:39.247 ], 00:35:39.247 "dhchap_dhgroups": [ 00:35:39.247 "null", 00:35:39.247 "ffdhe2048", 00:35:39.247 "ffdhe3072", 00:35:39.247 "ffdhe4096", 00:35:39.247 "ffdhe6144", 00:35:39.247 "ffdhe8192" 00:35:39.247 ] 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "bdev_nvme_set_hotplug", 00:35:39.247 "params": { 00:35:39.247 "period_us": 100000, 00:35:39.247 "enable": false 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "bdev_malloc_create", 00:35:39.247 "params": { 00:35:39.247 "name": "malloc0", 00:35:39.247 "num_blocks": 8192, 00:35:39.247 "block_size": 4096, 00:35:39.247 "physical_block_size": 4096, 00:35:39.247 "uuid": "a4a86a32-862d-497c-8047-0763479fe88a", 00:35:39.247 "optimal_io_boundary": 0 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "bdev_wait_for_examine" 00:35:39.247 } 00:35:39.247 ] 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "subsystem": "nbd", 00:35:39.247 "config": [] 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "subsystem": "scheduler", 00:35:39.247 "config": [ 00:35:39.247 { 00:35:39.247 "method": "framework_set_scheduler", 00:35:39.247 "params": { 00:35:39.247 "name": "static" 00:35:39.247 } 00:35:39.247 } 00:35:39.247 ] 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "subsystem": "nvmf", 00:35:39.247 "config": [ 00:35:39.247 { 00:35:39.247 "method": "nvmf_set_config", 00:35:39.247 "params": { 00:35:39.247 "discovery_filter": "match_any", 00:35:39.247 "admin_cmd_passthru": { 00:35:39.247 "identify_ctrlr": false 00:35:39.247 } 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "nvmf_set_max_subsystems", 00:35:39.247 "params": { 00:35:39.247 "max_subsystems": 1024 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "nvmf_set_crdt", 00:35:39.247 "params": { 00:35:39.247 "crdt1": 0, 00:35:39.247 "crdt2": 0, 00:35:39.247 "crdt3": 0 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "nvmf_create_transport", 00:35:39.247 "params": { 00:35:39.247 "trtype": "TCP", 00:35:39.247 "max_queue_depth": 128, 00:35:39.247 "max_io_qpairs_per_ctrlr": 127, 00:35:39.247 "in_capsule_data_size": 4096, 00:35:39.247 "max_io_size": 131072, 00:35:39.247 "io_unit_size": 131072, 00:35:39.247 "max_aq_depth": 128, 00:35:39.247 "num_shared_buffers": 511, 00:35:39.247 "buf_cache_size": 4294967295, 00:35:39.247 "dif_insert_or_strip": false, 00:35:39.247 "zcopy": false, 00:35:39.247 "c2h_success": false, 00:35:39.247 "sock_priority": 0, 00:35:39.247 "abort_timeout_sec": 1, 00:35:39.247 "ack_timeout": 0, 00:35:39.247 "data_wr_pool_size": 0 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "nvmf_create_subsystem", 00:35:39.247 "params": { 00:35:39.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.247 "allow_any_host": false, 00:35:39.247 "serial_number": "00000000000000000000", 00:35:39.247 "model_number": "SPDK bdev Controller", 00:35:39.247 "max_namespaces": 32, 00:35:39.247 "min_cntlid": 1, 00:35:39.247 "max_cntlid": 65519, 00:35:39.247 "ana_reporting": false 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "nvmf_subsystem_add_host", 00:35:39.247 "params": { 00:35:39.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.247 "host": "nqn.2016-06.io.spdk:host1", 00:35:39.247 "psk": "key0" 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "nvmf_subsystem_add_ns", 00:35:39.247 "params": { 00:35:39.247 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:35:39.247 "namespace": { 00:35:39.247 "nsid": 1, 00:35:39.247 "bdev_name": "malloc0", 00:35:39.247 "nguid": "A4A86A32862D497C80470763479FE88A", 00:35:39.247 "uuid": "a4a86a32-862d-497c-8047-0763479fe88a", 00:35:39.247 "no_auto_visible": false 00:35:39.247 } 00:35:39.247 } 00:35:39.247 }, 00:35:39.247 { 00:35:39.247 "method": "nvmf_subsystem_add_listener", 00:35:39.247 "params": { 00:35:39.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.247 "listen_address": { 00:35:39.247 "trtype": "TCP", 00:35:39.247 "adrfam": "IPv4", 00:35:39.247 "traddr": "10.0.0.2", 00:35:39.247 "trsvcid": "4420" 00:35:39.247 }, 00:35:39.247 "secure_channel": true 00:35:39.247 } 00:35:39.247 } 00:35:39.247 ] 00:35:39.247 } 00:35:39.247 ] 00:35:39.247 }' 00:35:39.247 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:39.247 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:39.247 16:48:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2843282 00:35:39.247 16:48:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:35:39.247 16:48:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2843282 00:35:39.247 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2843282 ']' 00:35:39.248 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.248 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:39.248 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:39.248 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:39.248 16:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:39.248 [2024-07-22 16:48:58.716861] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:39.248 [2024-07-22 16:48:58.716939] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.248 EAL: No free 2048 kB hugepages reported on node 1 00:35:39.248 [2024-07-22 16:48:58.789448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.248 [2024-07-22 16:48:58.880504] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.248 [2024-07-22 16:48:58.880575] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.248 [2024-07-22 16:48:58.880589] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.248 [2024-07-22 16:48:58.880608] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.248 [2024-07-22 16:48:58.880618] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:39.248 [2024-07-22 16:48:58.880706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.506 [2024-07-22 16:48:59.113537] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.506 [2024-07-22 16:48:59.145564] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:39.506 [2024-07-22 16:48:59.155149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2843434 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2843434 /var/tmp/bdevperf.sock 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2843434 ']' 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:40.072 16:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:35:40.072 "subsystems": [ 00:35:40.072 { 00:35:40.072 "subsystem": "keyring", 00:35:40.072 "config": [ 00:35:40.072 { 00:35:40.072 "method": "keyring_file_add_key", 00:35:40.072 "params": { 00:35:40.072 "name": "key0", 00:35:40.072 "path": "/tmp/tmp.eSenNS7DP3" 00:35:40.072 } 00:35:40.072 } 00:35:40.072 ] 00:35:40.072 }, 00:35:40.072 { 00:35:40.072 "subsystem": "iobuf", 00:35:40.072 "config": [ 00:35:40.072 { 00:35:40.072 "method": "iobuf_set_options", 00:35:40.072 "params": { 00:35:40.072 "small_pool_count": 8192, 00:35:40.072 "large_pool_count": 1024, 00:35:40.072 "small_bufsize": 8192, 00:35:40.072 "large_bufsize": 135168 00:35:40.072 } 00:35:40.072 } 00:35:40.072 ] 00:35:40.072 }, 00:35:40.072 { 00:35:40.072 "subsystem": "sock", 00:35:40.072 "config": [ 00:35:40.072 { 00:35:40.072 "method": "sock_set_default_impl", 00:35:40.072 "params": { 00:35:40.072 "impl_name": "posix" 00:35:40.073 } 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "method": "sock_impl_set_options", 00:35:40.073 "params": { 00:35:40.073 "impl_name": "ssl", 00:35:40.073 "recv_buf_size": 4096, 00:35:40.073 "send_buf_size": 4096, 00:35:40.073 "enable_recv_pipe": true, 00:35:40.073 "enable_quickack": false, 00:35:40.073 "enable_placement_id": 0, 00:35:40.073 "enable_zerocopy_send_server": true, 00:35:40.073 "enable_zerocopy_send_client": false, 00:35:40.073 "zerocopy_threshold": 0, 00:35:40.073 "tls_version": 0, 00:35:40.073 "enable_ktls": false 00:35:40.073 } 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "method": "sock_impl_set_options", 00:35:40.073 "params": { 00:35:40.073 "impl_name": "posix", 00:35:40.073 "recv_buf_size": 2097152, 00:35:40.073 "send_buf_size": 2097152, 00:35:40.073 
"enable_recv_pipe": true, 00:35:40.073 "enable_quickack": false, 00:35:40.073 "enable_placement_id": 0, 00:35:40.073 "enable_zerocopy_send_server": true, 00:35:40.073 "enable_zerocopy_send_client": false, 00:35:40.073 "zerocopy_threshold": 0, 00:35:40.073 "tls_version": 0, 00:35:40.073 "enable_ktls": false 00:35:40.073 } 00:35:40.073 } 00:35:40.073 ] 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "subsystem": "vmd", 00:35:40.073 "config": [] 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "subsystem": "accel", 00:35:40.073 "config": [ 00:35:40.073 { 00:35:40.073 "method": "accel_set_options", 00:35:40.073 "params": { 00:35:40.073 "small_cache_size": 128, 00:35:40.073 "large_cache_size": 16, 00:35:40.073 "task_count": 2048, 00:35:40.073 "sequence_count": 2048, 00:35:40.073 "buf_count": 2048 00:35:40.073 } 00:35:40.073 } 00:35:40.073 ] 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "subsystem": "bdev", 00:35:40.073 "config": [ 00:35:40.073 { 00:35:40.073 "method": "bdev_set_options", 00:35:40.073 "params": { 00:35:40.073 "bdev_io_pool_size": 65535, 00:35:40.073 "bdev_io_cache_size": 256, 00:35:40.073 "bdev_auto_examine": true, 00:35:40.073 "iobuf_small_cache_size": 128, 00:35:40.073 "iobuf_large_cache_size": 16 00:35:40.073 } 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "method": "bdev_raid_set_options", 00:35:40.073 "params": { 00:35:40.073 "process_window_size_kb": 1024 00:35:40.073 } 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "method": "bdev_iscsi_set_options", 00:35:40.073 "params": { 00:35:40.073 "timeout_sec": 30 00:35:40.073 } 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "method": "bdev_nvme_set_options", 00:35:40.073 "params": { 00:35:40.073 "action_on_timeout": "none", 00:35:40.073 "timeout_us": 0, 00:35:40.073 "timeout_admin_us": 0, 00:35:40.073 "keep_alive_timeout_ms": 10000, 00:35:40.073 "arbitration_burst": 0, 00:35:40.073 "low_priority_weight": 0, 00:35:40.073 "medium_priority_weight": 0, 00:35:40.073 "high_priority_weight": 0, 00:35:40.073 "nvme_adminq_poll_period_us": 10000, 00:35:40.073 "nvme_ioq_poll_period_us": 0, 00:35:40.073 "io_queue_requests": 512, 00:35:40.073 "delay_cmd_submit": true, 00:35:40.073 "transport_retry_count": 4, 00:35:40.073 "bdev_retry_count": 3, 00:35:40.073 "transport_ack_timeout": 0, 00:35:40.073 "ctrlr_loss_timeout_sec": 0, 00:35:40.073 "reconnect_delay_sec": 0, 00:35:40.073 "fast_io_fail_timeout_sec": 0, 00:35:40.073 "disable_auto_failback": false, 00:35:40.073 "generate_uuids": false, 00:35:40.073 "transport_tos": 0, 00:35:40.073 "nvme_error_stat": false, 00:35:40.073 "rdma_srq_size": 0, 00:35:40.073 "io_path_stat": false, 00:35:40.073 "allow_accel_sequence": false, 00:35:40.073 "rdma_max_cq_size": 0, 00:35:40.073 "rdma_cm_event_timeout_ms": 0, 00:35:40.073 "dhchap_digests": [ 00:35:40.073 "sha256", 00:35:40.073 "sha384", 00:35:40.073 "sha512" 00:35:40.073 ], 00:35:40.073 "dhchap_dhgroups": [ 00:35:40.073 "null", 00:35:40.073 "ffdhe2048", 00:35:40.073 "ffdhe3072", 00:35:40.073 "ffdhe4096", 00:35:40.073 "ffdhe6144", 00:35:40.073 "ffdhe8192" 00:35:40.073 ] 00:35:40.073 } 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "method": "bdev_nvme_attach_controller", 00:35:40.073 "params": { 00:35:40.073 "name": "nvme0", 00:35:40.073 "trtype": "TCP", 00:35:40.073 "adrfam": "IPv4", 00:35:40.073 "traddr": "10.0.0.2", 00:35:40.073 "trsvcid": "4420", 00:35:40.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:40.073 "prchk_reftag": false, 00:35:40.073 "prchk_guard": false, 00:35:40.073 "ctrlr_loss_timeout_sec": 0, 00:35:40.073 "reconnect_delay_sec": 0, 00:35:40.073 
"fast_io_fail_timeout_sec": 0, 00:35:40.073 "psk": "key0", 00:35:40.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:40.073 "hdgst": false, 00:35:40.073 "ddgst": false 00:35:40.073 } 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "method": "bdev_nvme_set_hotplug", 00:35:40.073 "params": { 00:35:40.073 "period_us": 100000, 00:35:40.073 "enable": false 00:35:40.073 } 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "method": "bdev_enable_histogram", 00:35:40.073 "params": { 00:35:40.073 "name": "nvme0n1", 00:35:40.073 "enable": true 00:35:40.073 } 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "method": "bdev_wait_for_examine" 00:35:40.073 } 00:35:40.073 ] 00:35:40.073 }, 00:35:40.073 { 00:35:40.073 "subsystem": "nbd", 00:35:40.073 "config": [] 00:35:40.073 } 00:35:40.073 ] 00:35:40.073 }' 00:35:40.073 16:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:40.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:40.073 16:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:40.073 16:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:40.332 [2024-07-22 16:48:59.748113] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:40.332 [2024-07-22 16:48:59.748207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843434 ] 00:35:40.332 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.332 [2024-07-22 16:48:59.820634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.332 [2024-07-22 16:48:59.911174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.590 [2024-07-22 16:49:00.092576] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:41.155 16:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:41.155 16:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:35:41.155 16:49:00 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:41.155 16:49:00 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:35:41.413 16:49:00 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.413 16:49:00 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:41.413 Running I/O for 1 seconds... 
00:35:42.787 
00:35:42.787 Latency(us)
00:35:42.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:42.787 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:35:42.787 Verification LBA range: start 0x0 length 0x2000
00:35:42.787 nvme0n1 : 1.02 2813.78 10.99 0.00 0.00 44986.86 6844.87 93206.76
00:35:42.787 ===================================================================================================================
00:35:42.787 Total : 2813.78 10.99 0.00 0.00 44986.86 6844.87 93206.76
00:35:42.787 0
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']'
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]]
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:35:42.787 nvmf_trace.0
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2843434
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2843434 ']'
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2843434
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2843434
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2843434'
00:35:42.787 killing process with pid 2843434
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2843434
00:35:42.787 Received shutdown signal, test time was about 1.000000 seconds
00:35:42.787 
00:35:42.787 Latency(us)
00:35:42.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:42.787 ===================================================================================================================
00:35:42.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2843434
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync
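The process_shm step in the cleanup above is what preserves the trace buffer for later debugging: the shared-memory file matching the shm ID is located under /dev/shm and archived into the job output directory before the target is torn down. The same logic, condensed (OUTPUT_DIR stands in for the long spdk/../output path):

    # Archive the trace shared-memory file for shm id 0 (here nvmf_trace.0).
    shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
    for n in $shm_files; do
        tar -C /dev/shm/ -cvzf "$OUTPUT_DIR/${n}_shm.tar.gz" "$n"
    done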
00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:42.787 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:42.787 rmmod nvme_tcp 00:35:42.787 rmmod nvme_fabrics 00:35:43.045 rmmod nvme_keyring 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2843282 ']' 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2843282 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2843282 ']' 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2843282 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2843282 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2843282' 00:35:43.045 killing process with pid 2843282 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2843282 00:35:43.045 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2843282 00:35:43.303 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:43.303 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:43.303 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:43.303 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:43.303 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:43.303 16:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.303 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:43.303 16:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.204 16:49:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:45.204 16:49:04 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ELzf0J11IK /tmp/tmp.ZQjSJVYsLe /tmp/tmp.eSenNS7DP3 00:35:45.204 00:35:45.204 real 1m19.428s 00:35:45.204 user 2m4.606s 00:35:45.204 sys 0m29.919s 00:35:45.204 16:49:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:45.204 16:49:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:45.204 ************************************ 00:35:45.204 END TEST nvmf_tls 00:35:45.204 ************************************ 00:35:45.204 16:49:04 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:35:45.204 16:49:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:45.204 16:49:04 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:35:45.204 16:49:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.204 ************************************ 00:35:45.204 START TEST nvmf_fips 00:35:45.204 ************************************ 00:35:45.204 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:35:45.463 * Looking for test storage... 00:35:45.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@333 -- # read -ra ver1 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:35:45.463 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:35:45.464 Error setting digest 00:35:45.464 00F20C1D8A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:35:45.464 00F20C1D8A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:35:45.464 16:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:47.994 
16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:35:47.994 Found 0000:82:00.0 (0x8086 - 0x159b) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:35:47.994 Found 0000:82:00.1 (0x8086 - 0x159b) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:35:47.994 Found net devices under 0000:82:00.0: cvl_0_0 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:35:47.994 Found net devices under 0000:82:00.1: cvl_0_1 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:47.994 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:47.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:47.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:35:47.995 00:35:47.995 --- 10.0.0.2 ping statistics --- 00:35:47.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.995 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:35:47.995 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:47.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:47.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:35:47.995 00:35:47.995 --- 10.0.0.1 ping statistics --- 00:35:47.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.995 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:35:47.995 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:47.995 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:35:47.995 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:47.995 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:47.995 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:47.995 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:47.995 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:47.995 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:47.995 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2846086 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2846086 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 2846086 ']' 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:48.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:48.253 16:49:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:48.253 [2024-07-22 16:49:07.732987] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:48.253 [2024-07-22 16:49:07.733081] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:48.253 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.253 [2024-07-22 16:49:07.809561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.253 [2024-07-22 16:49:07.900235] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:48.253 [2024-07-22 16:49:07.900285] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
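The ping exchange above confirms the point-to-point topology that nvmftestinit built from the two ice ports: cvl_0_0 is moved into a private network namespace and addressed as the target (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). The setup sequence, collected from the nvmf_tcp_init steps in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT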
00:35:48.253 [2024-07-22 16:49:07.900316] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:48.253 [2024-07-22 16:49:07.900328] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:48.253 [2024-07-22 16:49:07.900339] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:48.253 [2024-07-22 16:49:07.900387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:35:49.187 16:49:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:49.445 [2024-07-22 16:49:08.939834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:49.445 [2024-07-22 16:49:08.955844] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:49.445 [2024-07-22 16:49:08.956054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.445 [2024-07-22 16:49:08.988402] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:49.445 malloc0 00:35:49.445 16:49:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:49.445 16:49:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2846242 00:35:49.445 16:49:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:49.445 16:49:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2846242 /var/tmp/bdevperf.sock 00:35:49.445 16:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 2846242 ']' 00:35:49.445 16:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:49.445 16:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:35:49.445 16:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:49.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:49.445 16:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:49.445 16:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:49.445 [2024-07-22 16:49:09.079710] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:49.445 [2024-07-22 16:49:09.079804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846242 ] 00:35:49.703 EAL: No free 2048 kB hugepages reported on node 1 00:35:49.703 [2024-07-22 16:49:09.147827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.703 [2024-07-22 16:49:09.231127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:49.703 16:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:49.703 16:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:35:49.703 16:49:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:35:49.961 [2024-07-22 16:49:09.602743] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:49.961 [2024-07-22 16:49:09.602865] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:50.219 TLSTESTn1 00:35:50.219 16:49:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:50.219 Running I/O for 10 seconds... 
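The bdev_nvme_attach_controller call above is where the TLS handshake under test actually happens: the PSK interchange key written to key.txt earlier (echo -n "$key" plus chmod 0600) is handed to the initiator-side bdev, so TLSTESTn1 only exists if the secure channel came up. Condensed from the trace, with the workspace prefix trimmed:

    # Register the PSK on disk, then attach over TLS using it.
    echo -n "$key" > ./test/nvmf/fips/key.txt && chmod 0600 ./test/nvmf/fips/key.txt
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk ./test/nvmf/fips/key.txt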
00:36:00.288 
00:36:00.288 Latency(us)
00:36:00.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:00.288 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:36:00.288 Verification LBA range: start 0x0 length 0x2000
00:36:00.288 TLSTESTn1 : 10.02 3703.25 14.47 0.00 0.00 34503.46 9806.13 42137.22
00:36:00.288 ===================================================================================================================
00:36:00.288 Total : 3703.25 14.47 0.00 0.00 34503.46 9806.13 42137.22
00:36:00.288 0
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']'
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]]
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:36:00.288 nvmf_trace.0
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2846242
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 2846242 ']'
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 2846242
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:36:00.288 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2846242
00:36:00.546 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:36:00.546 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:36:00.546 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2846242'
00:36:00.546 killing process with pid 2846242
00:36:00.546 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 2846242
00:36:00.546 Received shutdown signal, test time was about 10.000000 seconds
00:36:00.546 
00:36:00.546 Latency(us)
00:36:00.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:00.546 ===================================================================================================================
00:36:00.546 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:00.546 [2024-07-22 16:49:19.942596] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:36:00.546 16:49:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 2846242
00:36:00.546 16:49:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:36:00.546 16:49:20 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:36:00.546 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:36:00.546 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:00.546 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:36:00.546 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:00.546 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:00.546 rmmod nvme_tcp 00:36:00.546 rmmod nvme_fabrics 00:36:00.807 rmmod nvme_keyring 00:36:00.807 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:00.807 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:36:00.807 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:36:00.807 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2846086 ']' 00:36:00.808 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2846086 00:36:00.808 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 2846086 ']' 00:36:00.808 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 2846086 00:36:00.808 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:36:00.808 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:00.808 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2846086 00:36:00.808 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:00.808 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:00.808 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2846086' 00:36:00.808 killing process with pid 2846086 00:36:00.808 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 2846086 00:36:00.808 [2024-07-22 16:49:20.261224] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:00.808 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 2846086 00:36:01.065 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:01.065 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:01.065 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:01.065 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:01.065 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:01.065 16:49:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.065 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:01.065 16:49:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.965 16:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:02.965 16:49:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:36:02.965 00:36:02.965 real 0m17.749s 00:36:02.965 user 0m21.333s 00:36:02.965 sys 0m7.019s 00:36:02.965 16:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:02.965 16:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:02.965 ************************************ 00:36:02.965 END TEST nvmf_fips 
00:36:02.965 ************************************ 00:36:02.965 16:49:22 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:36:02.965 16:49:22 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:36:02.965 16:49:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:36:02.965 16:49:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:02.965 16:49:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:02.965 ************************************ 00:36:02.965 START TEST nvmf_fuzz 00:36:02.965 ************************************ 00:36:02.965 16:49:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:36:03.223 * Looking for test storage... 00:36:03.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated four more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[the PATH above] 00:36:03.223 16:49:22 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[the PATH above] 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo [the exported PATH] 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:03.224 16:49:22
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:36:03.224 16:49:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:36:05.755 Found 0000:82:00.0 (0x8086 - 0x159b) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:36:05.755 Found 0000:82:00.1 (0x8086 - 0x159b) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:36:05.755 Found net devices under 0000:82:00.0: cvl_0_0 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:36:05.755 Found net devices under 0000:82:00.1: cvl_0_1 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:05.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:05.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:36:05.755 00:36:05.755 --- 10.0.0.2 ping statistics --- 00:36:05.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.755 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:05.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:05.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:36:05.755 00:36:05.755 --- 10.0.0.1 ping statistics --- 00:36:05.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.755 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2849900 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2849900 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 2849900 ']' 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:05.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
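The step just traced, starting nvmf_tgt inside the target namespace and blocking until it answers on its RPC socket, reduces to roughly the sketch below (workspace path as in this run; the socket poll is a simplified stand-in for what waitforlisten actually does):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -i 0 selects the shared-memory instance id, -e 0xFFFF enables all
  # tracepoint groups, -m 0x1 pins the target app to core 0
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # crude stand-in for waitforlisten: wait for the RPC UNIX socket to appear
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done

Once the listener is up, the trace drives two nvme_fuzz passes against it: a seeded 30-second random pass (-t 30 -S 123456) and a replay of the canned commands in example.json, each summarized by the opcode dumps that follow.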
00:36:05.755 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:05.756 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:06.014 Malloc0 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:36:06.014 16:49:25 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:36:38.081 Fuzzing completed. 
Shutting down the fuzz application 00:36:38.081 00:36:38.081 Dumping successful admin opcodes: 00:36:38.081 8, 9, 10, 24, 00:36:38.081 Dumping successful io opcodes: 00:36:38.081 0, 9, 00:36:38.081 NS: 0x200003aeff00 I/O qp, Total commands completed: 465648, total successful commands: 2691, random_seed: 4140842560 00:36:38.081 NS: 0x200003aeff00 admin qp, Total commands completed: 57664, total successful commands: 462, random_seed: 301673024 00:36:38.081 16:49:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:36:38.081 Fuzzing completed. Shutting down the fuzz application 00:36:38.081 00:36:38.081 Dumping successful admin opcodes: 00:36:38.081 24, 00:36:38.081 Dumping successful io opcodes: 00:36:38.081 00:36:38.081 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3103988757 00:36:38.081 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3104120017 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:38.081 rmmod nvme_tcp 00:36:38.081 rmmod nvme_fabrics 00:36:38.081 rmmod nvme_keyring 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2849900 ']' 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2849900 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 2849900 ']' 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 2849900 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2849900 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2849900' 00:36:38.081 killing process with pid 2849900 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 2849900 00:36:38.081 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 2849900 00:36:38.340 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:38.340 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:38.340 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:38.340 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:38.340 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:38.340 16:49:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.340 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:38.340 16:49:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.239 16:49:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:40.239 16:49:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:36:40.239 00:36:40.240 real 0m37.241s 00:36:40.240 user 0m50.596s 00:36:40.240 sys 0m16.195s 00:36:40.240 16:49:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:40.240 16:49:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:40.240 ************************************ 00:36:40.240 END TEST nvmf_fuzz 00:36:40.240 ************************************ 00:36:40.240 16:49:59 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:36:40.240 16:49:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:36:40.240 16:49:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:40.240 16:49:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:40.498 ************************************ 00:36:40.498 START TEST nvmf_multiconnection 00:36:40.498 ************************************ 00:36:40.498 16:49:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:36:40.498 * Looking for test storage... 
00:36:40.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[the PATH dumped above] 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[the PATH above] 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo [the exported PATH] 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g
is_hw=no 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:36:40.499 16:49:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:43.028 16:50:02 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:36:43.028 Found 0000:82:00.0 (0x8086 - 0x159b) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:36:43.028 Found 0000:82:00.1 (0x8086 - 0x159b) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:36:43.028 Found net devices under 0000:82:00.0: cvl_0_0 00:36:43.028 16:50:02 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:36:43.028 Found net devices under 0000:82:00.1: cvl_0_1 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:43.028 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:43.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:43.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:36:43.029 00:36:43.029 --- 10.0.0.2 ping statistics --- 00:36:43.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.029 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:43.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:43.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:36:43.029 00:36:43.029 --- 10.0.0.1 ping statistics --- 00:36:43.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.029 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2855792 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2855792 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 2855792 ']' 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:43.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
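The same nvmf_tcp_init plumbing repeats for every phy-mode test in this log: the second E810 port (cvl_0_0) moves into a private namespace as the target side, its sibling (cvl_0_1) stays in the root namespace as the initiator, port 4420 is opened in the firewall, and one ping in each direction proves the 10.0.0.0/24 link before the target app starts. Condensed to its core (device names are this host's; the addr-flush and teardown steps are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns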
00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:43.029 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.287 [2024-07-22 16:50:02.694498] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:36:43.287 [2024-07-22 16:50:02.694586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:43.287 EAL: No free 2048 kB hugepages reported on node 1 00:36:43.287 [2024-07-22 16:50:02.779091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:43.287 [2024-07-22 16:50:02.875136] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:43.287 [2024-07-22 16:50:02.875192] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:43.287 [2024-07-22 16:50:02.875207] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:43.287 [2024-07-22 16:50:02.875218] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:43.287 [2024-07-22 16:50:02.875228] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:43.287 [2024-07-22 16:50:02.875316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.287 [2024-07-22 16:50:02.876348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:43.287 [2024-07-22 16:50:02.876401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:43.287 [2024-07-22 16:50:02.876404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.544 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:43.544 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:36:43.544 16:50:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:43.544 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:43.544 16:50:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 [2024-07-22 16:50:03.017497] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 Malloc1 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 [2024-07-22 16:50:03.072335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 Malloc2 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 Malloc3 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.544 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 Malloc4 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
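Each iteration of the loop traced here builds one standalone malloc-backed subsystem, and with NVMF_SUBSYS=11 it continues through cnode11. The bring-up, transport included, reduces to the rpc.py sketch below (rpc_cmd in the trace is a thin wrapper that talks to the same /var/tmp/spdk.sock; flags are the ones visible in the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      # 64 MiB bdev with 512-byte blocks, one per subsystem
      scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t tcp -a 10.0.0.2 -s 4420
  done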
00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 Malloc5 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 Malloc6 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 16:50:03 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 Malloc7 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.803 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.804 Malloc8 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.804 Malloc9 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:36:43.804 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:44.062 Malloc10 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:44.062 Malloc11 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
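Lines @21–@25 of target/multiconnection.sh, replayed above for Malloc3 through Malloc11, reduce to one loop: create a 64 MB malloc bdev with 512-byte blocks, wrap it in a subsystem with serial SPDK$i that allows any host (-a), attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. Reconstructed from the trace:

# Setup loop behind multiconnection.sh@21-25, reconstructed from the trace:
# each pass creates one malloc bdev (64 MB, 512 B blocks), one subsystem
# with serial "SPDK$i" and any-host access, one namespace, and one TCP
# listener on 10.0.0.2:4420.
for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done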
00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:44.062 16:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:44.628 16:50:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:36:44.628 16:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:36:44.628 16:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:36:44.628 16:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:36:44.628 16:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:36:46.525 16:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:36:46.526 16:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:36:46.526 16:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:36:46.783 16:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:36:46.783 16:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:36:46.783 16:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:36:46.783 16:50:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:46.783 16:50:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:36:47.348 16:50:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:36:47.348 16:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:36:47.348 16:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:36:47.348 16:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:36:47.348 16:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:36:49.241 16:50:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:36:49.241 16:50:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:36:49.241 16:50:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:36:49.241 16:50:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:36:49.241 16:50:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:36:49.241 
16:50:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:36:49.241 16:50:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:49.241 16:50:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:36:50.173 16:50:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:36:50.173 16:50:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:36:50.173 16:50:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:36:50.173 16:50:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:36:50.173 16:50:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:36:52.066 16:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:36:52.066 16:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:36:52.066 16:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:36:52.066 16:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:36:52.066 16:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:36:52.066 16:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:36:52.066 16:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:52.066 16:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:36:52.998 16:50:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:36:52.998 16:50:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:36:52.998 16:50:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:36:52.998 16:50:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:36:52.998 16:50:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:36:54.893 16:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:36:54.894 16:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:36:54.894 16:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:36:54.894 16:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:36:54.894 16:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:36:54.894 16:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:36:54.894 16:50:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:54.894 16:50:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:36:55.826 16:50:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:36:55.826 16:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:36:55.826 16:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:36:55.826 16:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:36:55.826 16:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:36:57.723 16:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:36:57.723 16:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:36:57.723 16:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:36:57.723 16:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:36:57.723 16:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:36:57.723 16:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:36:57.723 16:50:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:57.723 16:50:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:36:58.656 16:50:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:36:58.656 16:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:36:58.656 16:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:36:58.656 16:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:36:58.656 16:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:37:00.553 16:50:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:00.553 16:50:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:00.553 16:50:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:37:00.553 16:50:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:00.553 16:50:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:00.553 16:50:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:37:00.553 16:50:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:00.553 16:50:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:37:01.487 16:50:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:37:01.487 16:50:20 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:37:01.487 16:50:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:01.487 16:50:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:01.487 16:50:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:37:03.384 16:50:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:03.384 16:50:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:03.384 16:50:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:37:03.384 16:50:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:03.384 16:50:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:03.384 16:50:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:37:03.384 16:50:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:03.384 16:50:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:37:03.947 16:50:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:37:03.947 16:50:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:37:03.947 16:50:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:03.947 16:50:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:03.947 16:50:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:37:06.473 16:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:06.473 16:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:06.473 16:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:37:06.473 16:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:06.473 16:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:06.473 16:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:37:06.473 16:50:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:06.473 16:50:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:37:07.039 16:50:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:37:07.039 16:50:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:37:07.039 16:50:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:07.039 16:50:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
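The connect phase running above and below (multiconnection.sh@28–30) pairs each nvme connect with a waitforserial poll that sleeps 2 s per attempt, up to 16 attempts, until lsblk reports a block device carrying the expected serial. A sketch matching the trace order; HOSTNQN/HOSTID stand in for the literal uuid:8b464f06-2980-e311-ba20-001e67a94acd values used in the log:

# waitforserial as replayed in the trace: poll lsblk until exactly one
# device shows the expected serial, sleeping 2 s between up to 16 attempts.
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    sleep 2
    while ((i++ <= 15)); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        ((nvme_devices == nvme_device_counter)) && return 0
        sleep 2
    done
    return 1
}

# Connect loop (multiconnection.sh@28-30).
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    waitforserial "SPDK$i"
done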
00:37:07.039 16:50:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:37:09.018 16:50:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:09.018 16:50:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:09.018 16:50:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:37:09.018 16:50:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:09.018 16:50:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:09.018 16:50:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:37:09.018 16:50:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:09.018 16:50:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:37:09.952 16:50:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:37:09.952 16:50:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:37:09.952 16:50:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:09.952 16:50:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:09.952 16:50:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:37:11.850 16:50:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:11.850 16:50:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:11.850 16:50:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:37:11.850 16:50:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:11.850 16:50:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:11.850 16:50:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:37:11.850 16:50:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:11.850 16:50:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:37:12.783 16:50:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:37:12.783 16:50:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:37:12.783 16:50:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:12.783 16:50:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:12.783 16:50:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:37:14.679 16:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:14.679 16:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:37:14.679 16:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:37:14.679 16:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:14.679 16:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:14.679 16:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:37:14.679 16:50:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:37:14.679 [global] 00:37:14.679 thread=1 00:37:14.679 invalidate=1 00:37:14.679 rw=read 00:37:14.679 time_based=1 00:37:14.679 runtime=10 00:37:14.679 ioengine=libaio 00:37:14.679 direct=1 00:37:14.679 bs=262144 00:37:14.679 iodepth=64 00:37:14.679 norandommap=1 00:37:14.679 numjobs=1 00:37:14.679 00:37:14.679 [job0] 00:37:14.679 filename=/dev/nvme0n1 00:37:14.679 [job1] 00:37:14.679 filename=/dev/nvme10n1 00:37:14.679 [job2] 00:37:14.679 filename=/dev/nvme1n1 00:37:14.679 [job3] 00:37:14.679 filename=/dev/nvme2n1 00:37:14.679 [job4] 00:37:14.679 filename=/dev/nvme3n1 00:37:14.679 [job5] 00:37:14.679 filename=/dev/nvme4n1 00:37:14.679 [job6] 00:37:14.679 filename=/dev/nvme5n1 00:37:14.679 [job7] 00:37:14.679 filename=/dev/nvme6n1 00:37:14.679 [job8] 00:37:14.679 filename=/dev/nvme7n1 00:37:14.679 [job9] 00:37:14.679 filename=/dev/nvme8n1 00:37:14.679 [job10] 00:37:14.679 filename=/dev/nvme9n1 00:37:14.679 Could not set queue depth (nvme0n1) 00:37:14.679 Could not set queue depth (nvme10n1) 00:37:14.679 Could not set queue depth (nvme1n1) 00:37:14.679 Could not set queue depth (nvme2n1) 00:37:14.679 Could not set queue depth (nvme3n1) 00:37:14.679 Could not set queue depth (nvme4n1) 00:37:14.679 Could not set queue depth (nvme5n1) 00:37:14.679 Could not set queue depth (nvme6n1) 00:37:14.679 Could not set queue depth (nvme7n1) 00:37:14.679 Could not set queue depth (nvme8n1) 00:37:14.679 Could not set queue depth (nvme9n1) 00:37:14.937 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:14.937 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:14.937 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:14.937 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:14.937 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:14.937 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:14.937 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:14.937 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:14.937 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:14.937 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:14.937 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:14.937 fio-3.35 00:37:14.937 Starting 11 threads 00:37:27.132 00:37:27.132 job0: 
(groupid=0, jobs=1): err= 0: pid=2860038: Mon Jul 22 16:50:44 2024 00:37:27.132 read: IOPS=914, BW=229MiB/s (240MB/s)(2316MiB/10132msec) 00:37:27.132 slat (usec): min=9, max=129535, avg=506.44, stdev=3494.14 00:37:27.132 clat (usec): min=915, max=257381, avg=69415.86, stdev=50444.22 00:37:27.132 lat (usec): min=985, max=295923, avg=69922.31, stdev=50702.08 00:37:27.132 clat percentiles (msec): 00:37:27.132 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 16], 20.00th=[ 27], 00:37:27.132 | 30.00th=[ 36], 40.00th=[ 50], 50.00th=[ 61], 60.00th=[ 70], 00:37:27.132 | 70.00th=[ 82], 80.00th=[ 106], 90.00th=[ 148], 95.00th=[ 176], 00:37:27.132 | 99.00th=[ 213], 99.50th=[ 234], 99.90th=[ 247], 99.95th=[ 257], 00:37:27.132 | 99.99th=[ 257] 00:37:27.132 bw ( KiB/s): min=122880, max=348487, per=12.88%, avg=235450.10, stdev=63127.36, samples=20 00:37:27.132 iops : min= 480, max= 1361, avg=919.65, stdev=246.61, samples=20 00:37:27.132 lat (usec) : 1000=0.01% 00:37:27.132 lat (msec) : 2=0.18%, 4=1.09%, 10=4.94%, 20=7.51%, 50=26.79% 00:37:27.132 lat (msec) : 100=37.73%, 250=21.64%, 500=0.10% 00:37:27.132 cpu : usr=0.43%, sys=2.49%, ctx=1957, majf=0, minf=3721 00:37:27.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:37:27.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:27.132 issued rwts: total=9264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.132 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:27.132 job1: (groupid=0, jobs=1): err= 0: pid=2860039: Mon Jul 22 16:50:44 2024 00:37:27.132 read: IOPS=523, BW=131MiB/s (137MB/s)(1325MiB/10125msec) 00:37:27.132 slat (usec): min=12, max=149113, avg=1444.23, stdev=5389.55 00:37:27.132 clat (msec): min=2, max=346, avg=120.72, stdev=51.11 00:37:27.132 lat (msec): min=2, max=346, avg=122.16, stdev=51.79 00:37:27.132 clat percentiles (msec): 00:37:27.132 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 78], 00:37:27.132 | 30.00th=[ 102], 40.00th=[ 122], 50.00th=[ 133], 60.00th=[ 144], 00:37:27.132 | 70.00th=[ 150], 80.00th=[ 163], 90.00th=[ 178], 95.00th=[ 188], 00:37:27.132 | 99.00th=[ 218], 99.50th=[ 222], 99.90th=[ 243], 99.95th=[ 279], 00:37:27.132 | 99.99th=[ 347] 00:37:27.132 bw ( KiB/s): min=82778, max=299008, per=7.33%, avg=133971.00, stdev=50052.00, samples=20 00:37:27.132 iops : min= 323, max= 1168, avg=523.25, stdev=195.53, samples=20 00:37:27.132 lat (msec) : 4=0.68%, 10=1.28%, 20=4.28%, 50=7.27%, 100=15.65% 00:37:27.132 lat (msec) : 250=70.78%, 500=0.06% 00:37:27.132 cpu : usr=0.28%, sys=1.39%, ctx=1223, majf=0, minf=4097 00:37:27.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:37:27.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:27.132 issued rwts: total=5298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.132 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:27.132 job2: (groupid=0, jobs=1): err= 0: pid=2860040: Mon Jul 22 16:50:44 2024 00:37:27.132 read: IOPS=624, BW=156MiB/s (164MB/s)(1583MiB/10135msec) 00:37:27.132 slat (usec): min=9, max=119597, avg=959.65, stdev=4338.53 00:37:27.132 clat (usec): min=998, max=294291, avg=101386.19, stdev=53320.81 00:37:27.132 lat (usec): min=1022, max=294314, avg=102345.83, stdev=53883.62 00:37:27.132 clat percentiles (msec): 00:37:27.132 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 28], 20.00th=[ 
47], 00:37:27.132 | 30.00th=[ 69], 40.00th=[ 86], 50.00th=[ 105], 60.00th=[ 123], 00:37:27.132 | 70.00th=[ 138], 80.00th=[ 150], 90.00th=[ 167], 95.00th=[ 182], 00:37:27.132 | 99.00th=[ 211], 99.50th=[ 226], 99.90th=[ 239], 99.95th=[ 247], 00:37:27.132 | 99.99th=[ 296] 00:37:27.132 bw ( KiB/s): min=100352, max=315784, per=8.77%, avg=160362.35, stdev=56029.43, samples=20 00:37:27.132 iops : min= 392, max= 1233, avg=626.35, stdev=218.78, samples=20 00:37:27.132 lat (usec) : 1000=0.02% 00:37:27.132 lat (msec) : 2=0.08%, 4=0.74%, 10=4.06%, 20=2.37%, 50=14.34% 00:37:27.132 lat (msec) : 100=26.14%, 250=52.22%, 500=0.03% 00:37:27.132 cpu : usr=0.33%, sys=1.75%, ctx=1523, majf=0, minf=4097 00:37:27.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:37:27.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:27.132 issued rwts: total=6331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.132 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:27.132 job3: (groupid=0, jobs=1): err= 0: pid=2860041: Mon Jul 22 16:50:44 2024 00:37:27.132 read: IOPS=895, BW=224MiB/s (235MB/s)(2260MiB/10094msec) 00:37:27.132 slat (usec): min=9, max=72190, avg=740.64, stdev=3055.69 00:37:27.132 clat (usec): min=835, max=231111, avg=70640.44, stdev=45739.23 00:37:27.132 lat (usec): min=860, max=262799, avg=71381.08, stdev=46153.28 00:37:27.132 clat percentiles (msec): 00:37:27.132 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 25], 20.00th=[ 31], 00:37:27.132 | 30.00th=[ 36], 40.00th=[ 45], 50.00th=[ 61], 60.00th=[ 77], 00:37:27.132 | 70.00th=[ 95], 80.00th=[ 112], 90.00th=[ 136], 95.00th=[ 155], 00:37:27.132 | 99.00th=[ 199], 99.50th=[ 209], 99.90th=[ 228], 99.95th=[ 232], 00:37:27.132 | 99.99th=[ 232] 00:37:27.132 bw ( KiB/s): min=111104, max=530906, per=12.58%, avg=230024.16, stdev=102879.38, samples=19 00:37:27.132 iops : min= 434, max= 2073, avg=898.47, stdev=401.74, samples=19 00:37:27.132 lat (usec) : 1000=0.04% 00:37:27.132 lat (msec) : 2=0.38%, 4=1.73%, 10=2.15%, 20=3.96%, 50=35.59% 00:37:27.132 lat (msec) : 100=29.13%, 250=27.02% 00:37:27.132 cpu : usr=0.41%, sys=2.16%, ctx=1876, majf=0, minf=4097 00:37:27.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:37:27.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:27.132 issued rwts: total=9041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.133 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:27.133 job4: (groupid=0, jobs=1): err= 0: pid=2860043: Mon Jul 22 16:50:44 2024 00:37:27.133 read: IOPS=585, BW=146MiB/s (153MB/s)(1481MiB/10123msec) 00:37:27.133 slat (usec): min=9, max=149695, avg=1083.87, stdev=4372.82 00:37:27.133 clat (usec): min=1012, max=248608, avg=108196.31, stdev=47679.63 00:37:27.133 lat (usec): min=1032, max=248623, avg=109280.18, stdev=48127.45 00:37:27.133 clat percentiles (msec): 00:37:27.133 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 45], 20.00th=[ 69], 00:37:27.133 | 30.00th=[ 83], 40.00th=[ 99], 50.00th=[ 109], 60.00th=[ 120], 00:37:27.133 | 70.00th=[ 132], 80.00th=[ 146], 90.00th=[ 171], 95.00th=[ 188], 00:37:27.133 | 99.00th=[ 230], 99.50th=[ 241], 99.90th=[ 249], 99.95th=[ 249], 00:37:27.133 | 99.99th=[ 249] 00:37:27.133 bw ( KiB/s): min=97280, max=228352, per=8.11%, avg=148310.16, stdev=39343.83, samples=19 00:37:27.133 iops : min= 380, max= 
892, avg=579.32, stdev=153.70, samples=19 00:37:27.133 lat (msec) : 2=0.41%, 4=0.22%, 10=0.74%, 20=2.11%, 50=7.68% 00:37:27.133 lat (msec) : 100=31.47%, 250=57.37% 00:37:27.133 cpu : usr=0.24%, sys=1.76%, ctx=1306, majf=0, minf=4097 00:37:27.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:37:27.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:27.133 issued rwts: total=5923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.133 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:27.133 job5: (groupid=0, jobs=1): err= 0: pid=2860044: Mon Jul 22 16:50:44 2024 00:37:27.133 read: IOPS=618, BW=155MiB/s (162MB/s)(1564MiB/10107msec) 00:37:27.133 slat (usec): min=9, max=145764, avg=879.74, stdev=4317.73 00:37:27.133 clat (usec): min=1292, max=313231, avg=102445.53, stdev=55238.95 00:37:27.133 lat (usec): min=1318, max=313249, avg=103325.27, stdev=55913.52 00:37:27.133 clat percentiles (msec): 00:37:27.133 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 34], 20.00th=[ 54], 00:37:27.133 | 30.00th=[ 70], 40.00th=[ 83], 50.00th=[ 99], 60.00th=[ 116], 00:37:27.133 | 70.00th=[ 132], 80.00th=[ 150], 90.00th=[ 176], 95.00th=[ 194], 00:37:27.133 | 99.00th=[ 249], 99.50th=[ 259], 99.90th=[ 271], 99.95th=[ 279], 00:37:27.133 | 99.99th=[ 313] 00:37:27.133 bw ( KiB/s): min=86866, max=252416, per=8.66%, avg=158420.15, stdev=48795.97, samples=20 00:37:27.133 iops : min= 339, max= 986, avg=618.75, stdev=190.61, samples=20 00:37:27.133 lat (msec) : 2=0.19%, 4=0.67%, 10=3.05%, 20=2.49%, 50=12.46% 00:37:27.133 lat (msec) : 100=32.00%, 250=48.16%, 500=0.98% 00:37:27.133 cpu : usr=0.18%, sys=1.65%, ctx=1599, majf=0, minf=4097 00:37:27.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:37:27.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:27.133 issued rwts: total=6254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.133 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:27.133 job6: (groupid=0, jobs=1): err= 0: pid=2860045: Mon Jul 22 16:50:44 2024 00:37:27.133 read: IOPS=573, BW=143MiB/s (150MB/s)(1442MiB/10061msec) 00:37:27.133 slat (usec): min=9, max=77139, avg=1451.26, stdev=4814.42 00:37:27.133 clat (usec): min=1219, max=259532, avg=110087.03, stdev=41032.89 00:37:27.133 lat (usec): min=1242, max=259563, avg=111538.29, stdev=41715.34 00:37:27.133 clat percentiles (msec): 00:37:27.133 | 1.00th=[ 8], 5.00th=[ 37], 10.00th=[ 62], 20.00th=[ 77], 00:37:27.133 | 30.00th=[ 88], 40.00th=[ 101], 50.00th=[ 110], 60.00th=[ 121], 00:37:27.133 | 70.00th=[ 133], 80.00th=[ 146], 90.00th=[ 163], 95.00th=[ 178], 00:37:27.133 | 99.00th=[ 197], 99.50th=[ 205], 99.90th=[ 220], 99.95th=[ 224], 00:37:27.133 | 99.99th=[ 259] 00:37:27.133 bw ( KiB/s): min=95232, max=263168, per=7.98%, avg=145929.75, stdev=46786.91, samples=20 00:37:27.133 iops : min= 372, max= 1028, avg=569.95, stdev=182.73, samples=20 00:37:27.133 lat (msec) : 2=0.02%, 4=0.49%, 10=0.66%, 20=1.58%, 50=3.80% 00:37:27.133 lat (msec) : 100=33.13%, 250=60.30%, 500=0.03% 00:37:27.133 cpu : usr=0.28%, sys=1.50%, ctx=1184, majf=0, minf=4097 00:37:27.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:37:27.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:37:27.133 issued rwts: total=5766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.133 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:27.133 job7: (groupid=0, jobs=1): err= 0: pid=2860048: Mon Jul 22 16:50:44 2024 00:37:27.133 read: IOPS=616, BW=154MiB/s (162MB/s)(1562MiB/10125msec) 00:37:27.133 slat (usec): min=10, max=93802, avg=1181.46, stdev=4382.62 00:37:27.133 clat (msec): min=2, max=273, avg=102.47, stdev=51.10 00:37:27.133 lat (msec): min=2, max=273, avg=103.66, stdev=51.77 00:37:27.133 clat percentiles (msec): 00:37:27.133 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 50], 00:37:27.133 | 30.00th=[ 71], 40.00th=[ 87], 50.00th=[ 106], 60.00th=[ 125], 00:37:27.133 | 70.00th=[ 140], 80.00th=[ 153], 90.00th=[ 167], 95.00th=[ 180], 00:37:27.133 | 99.00th=[ 201], 99.50th=[ 218], 99.90th=[ 251], 99.95th=[ 251], 00:37:27.133 | 99.99th=[ 275] 00:37:27.133 bw ( KiB/s): min=90624, max=292864, per=8.82%, avg=161175.21, stdev=56156.10, samples=19 00:37:27.133 iops : min= 354, max= 1144, avg=629.53, stdev=219.33, samples=19 00:37:27.133 lat (msec) : 4=0.37%, 10=1.55%, 20=2.11%, 50=16.46%, 100=26.65% 00:37:27.133 lat (msec) : 250=52.75%, 500=0.11% 00:37:27.133 cpu : usr=0.19%, sys=1.79%, ctx=1369, majf=0, minf=4097 00:37:27.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:37:27.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:27.133 issued rwts: total=6247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.133 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:27.133 job8: (groupid=0, jobs=1): err= 0: pid=2860075: Mon Jul 22 16:50:44 2024 00:37:27.133 read: IOPS=472, BW=118MiB/s (124MB/s)(1198MiB/10132msec) 00:37:27.133 slat (usec): min=10, max=80789, avg=1440.37, stdev=5358.77 00:37:27.133 clat (usec): min=814, max=259789, avg=133751.28, stdev=38757.62 00:37:27.133 lat (usec): min=836, max=259820, avg=135191.65, stdev=39344.09 00:37:27.133 clat percentiles (msec): 00:37:27.133 | 1.00th=[ 21], 5.00th=[ 67], 10.00th=[ 82], 20.00th=[ 101], 00:37:27.133 | 30.00th=[ 116], 40.00th=[ 131], 50.00th=[ 140], 60.00th=[ 148], 00:37:27.133 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 178], 95.00th=[ 188], 00:37:27.133 | 99.00th=[ 205], 99.50th=[ 213], 99.90th=[ 239], 99.95th=[ 247], 00:37:27.133 | 99.99th=[ 259] 00:37:27.134 bw ( KiB/s): min=94720, max=173568, per=6.62%, avg=120992.10, stdev=20792.67, samples=20 00:37:27.134 iops : min= 370, max= 678, avg=472.60, stdev=81.24, samples=20 00:37:27.134 lat (usec) : 1000=0.04% 00:37:27.134 lat (msec) : 2=0.06%, 4=0.10%, 10=0.40%, 20=0.31%, 50=1.73% 00:37:27.134 lat (msec) : 100=17.55%, 250=79.77%, 500=0.02% 00:37:27.134 cpu : usr=0.21%, sys=1.27%, ctx=1207, majf=0, minf=4097 00:37:27.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:37:27.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:27.134 issued rwts: total=4791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.134 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:27.134 job9: (groupid=0, jobs=1): err= 0: pid=2860091: Mon Jul 22 16:50:44 2024 00:37:27.134 read: IOPS=677, BW=169MiB/s (178MB/s)(1697MiB/10015msec) 00:37:27.134 slat (usec): min=9, max=116633, avg=683.61, stdev=3884.81 00:37:27.134 clat (usec): min=826, max=253472, avg=93659.68, 
stdev=59951.53 00:37:27.134 lat (usec): min=849, max=265348, avg=94343.29, stdev=60422.23 00:37:27.134 clat percentiles (usec): 00:37:27.134 | 1.00th=[ 1811], 5.00th=[ 7242], 10.00th=[ 14484], 20.00th=[ 28967], 00:37:27.134 | 30.00th=[ 46400], 40.00th=[ 72877], 50.00th=[ 94897], 60.00th=[115868], 00:37:27.134 | 70.00th=[133694], 80.00th=[152044], 90.00th=[170918], 95.00th=[193987], 00:37:27.134 | 99.00th=[212861], 99.50th=[221250], 99.90th=[244319], 99.95th=[246416], 00:37:27.134 | 99.99th=[252707] 00:37:27.134 bw ( KiB/s): min=90112, max=328704, per=9.29%, avg=169912.63, stdev=70729.56, samples=19 00:37:27.134 iops : min= 352, max= 1284, avg=663.68, stdev=276.22, samples=19 00:37:27.134 lat (usec) : 1000=0.13% 00:37:27.134 lat (msec) : 2=1.10%, 4=0.97%, 10=5.11%, 20=6.35%, 50=18.19% 00:37:27.134 lat (msec) : 100=20.42%, 250=47.69%, 500=0.03% 00:37:27.134 cpu : usr=0.36%, sys=1.64%, ctx=1895, majf=0, minf=4097 00:37:27.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:37:27.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:27.134 issued rwts: total=6788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.134 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:27.134 job10: (groupid=0, jobs=1): err= 0: pid=2860110: Mon Jul 22 16:50:44 2024 00:37:27.134 read: IOPS=659, BW=165MiB/s (173MB/s)(1670MiB/10129msec) 00:37:27.134 slat (usec): min=9, max=107792, avg=773.44, stdev=3865.75 00:37:27.134 clat (usec): min=818, max=261259, avg=96170.32, stdev=57216.97 00:37:27.134 lat (usec): min=842, max=264214, avg=96943.76, stdev=57708.50 00:37:27.134 clat percentiles (msec): 00:37:27.134 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 25], 20.00th=[ 35], 00:37:27.134 | 30.00th=[ 48], 40.00th=[ 74], 50.00th=[ 101], 60.00th=[ 118], 00:37:27.134 | 70.00th=[ 138], 80.00th=[ 153], 90.00th=[ 171], 95.00th=[ 186], 00:37:27.134 | 99.00th=[ 207], 99.50th=[ 213], 99.90th=[ 259], 99.95th=[ 262], 00:37:27.134 | 99.99th=[ 262] 00:37:27.134 bw ( KiB/s): min=92160, max=345600, per=9.26%, avg=169311.05, stdev=64633.82, samples=20 00:37:27.134 iops : min= 360, max= 1350, avg=661.25, stdev=252.50, samples=20 00:37:27.134 lat (usec) : 1000=0.07% 00:37:27.134 lat (msec) : 2=0.15%, 4=0.55%, 10=2.84%, 20=4.90%, 50=22.66% 00:37:27.134 lat (msec) : 100=18.85%, 250=49.70%, 500=0.27% 00:37:27.134 cpu : usr=0.24%, sys=1.65%, ctx=1721, majf=0, minf=4097 00:37:27.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:37:27.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:27.134 issued rwts: total=6680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.134 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:27.134 00:37:27.134 Run status group 0 (all jobs): 00:37:27.134 READ: bw=1785MiB/s (1872MB/s), 118MiB/s-229MiB/s (124MB/s-240MB/s), io=17.7GiB (19.0GB), run=10015-10135msec 00:37:27.134 00:37:27.134 Disk stats (read/write): 00:37:27.134 nvme0n1: ios=18276/0, merge=0/0, ticks=1242729/0, in_queue=1242729, util=97.10% 00:37:27.134 nvme10n1: ios=10392/0, merge=0/0, ticks=1230055/0, in_queue=1230055, util=97.31% 00:37:27.134 nvme1n1: ios=12485/0, merge=0/0, ticks=1238686/0, in_queue=1238686, util=97.55% 00:37:27.134 nvme2n1: ios=17858/0, merge=0/0, ticks=1241396/0, in_queue=1241396, util=97.70% 00:37:27.134 nvme3n1: ios=11680/0, merge=0/0, 
ticks=1240483/0, in_queue=1240483, util=97.77% 00:37:27.134 nvme4n1: ios=12323/0, merge=0/0, ticks=1241857/0, in_queue=1241857, util=98.13% 00:37:27.134 nvme5n1: ios=11269/0, merge=0/0, ticks=1234342/0, in_queue=1234342, util=98.29% 00:37:27.134 nvme6n1: ios=12309/0, merge=0/0, ticks=1234804/0, in_queue=1234804, util=98.35% 00:37:27.134 nvme7n1: ios=9363/0, merge=0/0, ticks=1236190/0, in_queue=1236190, util=98.84% 00:37:27.134 nvme8n1: ios=13310/0, merge=0/0, ticks=1243622/0, in_queue=1243622, util=99.04% 00:37:27.134 nvme9n1: ios=13168/0, merge=0/0, ticks=1240583/0, in_queue=1240583, util=99.19% 00:37:27.134 16:50:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:37:27.134 [global] 00:37:27.134 thread=1 00:37:27.134 invalidate=1 00:37:27.134 rw=randwrite 00:37:27.134 time_based=1 00:37:27.134 runtime=10 00:37:27.134 ioengine=libaio 00:37:27.134 direct=1 00:37:27.134 bs=262144 00:37:27.134 iodepth=64 00:37:27.134 norandommap=1 00:37:27.134 numjobs=1 00:37:27.134 00:37:27.134 [job0] 00:37:27.134 filename=/dev/nvme0n1 00:37:27.134 [job1] 00:37:27.134 filename=/dev/nvme10n1 00:37:27.134 [job2] 00:37:27.134 filename=/dev/nvme1n1 00:37:27.134 [job3] 00:37:27.134 filename=/dev/nvme2n1 00:37:27.134 [job4] 00:37:27.134 filename=/dev/nvme3n1 00:37:27.134 [job5] 00:37:27.134 filename=/dev/nvme4n1 00:37:27.134 [job6] 00:37:27.134 filename=/dev/nvme5n1 00:37:27.134 [job7] 00:37:27.134 filename=/dev/nvme6n1 00:37:27.134 [job8] 00:37:27.134 filename=/dev/nvme7n1 00:37:27.134 [job9] 00:37:27.134 filename=/dev/nvme8n1 00:37:27.134 [job10] 00:37:27.134 filename=/dev/nvme9n1 00:37:27.134 Could not set queue depth (nvme0n1) 00:37:27.134 Could not set queue depth (nvme10n1) 00:37:27.134 Could not set queue depth (nvme1n1) 00:37:27.134 Could not set queue depth (nvme2n1) 00:37:27.134 Could not set queue depth (nvme3n1) 00:37:27.134 Could not set queue depth (nvme4n1) 00:37:27.135 Could not set queue depth (nvme5n1) 00:37:27.135 Could not set queue depth (nvme6n1) 00:37:27.135 Could not set queue depth (nvme7n1) 00:37:27.135 Could not set queue depth (nvme8n1) 00:37:27.135 Could not set queue depth (nvme9n1) 00:37:27.135 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:27.135 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:27.135 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:27.135 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:27.135 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:27.135 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:27.135 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:27.135 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:27.135 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:27.135 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:27.135 
job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:27.135 fio-3.35 00:37:27.135 Starting 11 threads 00:37:37.097 00:37:37.097 job0: (groupid=0, jobs=1): err= 0: pid=2861218: Mon Jul 22 16:50:55 2024 00:37:37.097 write: IOPS=484, BW=121MiB/s (127MB/s)(1235MiB/10194msec); 0 zone resets 00:37:37.097 slat (usec): min=15, max=89770, avg=1163.26, stdev=4296.69 00:37:37.097 clat (usec): min=817, max=473013, avg=130767.65, stdev=104345.04 00:37:37.097 lat (usec): min=849, max=473052, avg=131930.91, stdev=105465.52 00:37:37.097 clat percentiles (msec): 00:37:37.097 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 15], 20.00th=[ 30], 00:37:37.097 | 30.00th=[ 48], 40.00th=[ 85], 50.00th=[ 117], 60.00th=[ 159], 00:37:37.097 | 70.00th=[ 182], 80.00th=[ 201], 90.00th=[ 257], 95.00th=[ 355], 00:37:37.097 | 99.00th=[ 447], 99.50th=[ 456], 99.90th=[ 468], 99.95th=[ 472], 00:37:37.097 | 99.99th=[ 472] 00:37:37.097 bw ( KiB/s): min=39936, max=257536, per=9.07%, avg=124866.75, stdev=63898.87, samples=20 00:37:37.097 iops : min= 156, max= 1006, avg=487.75, stdev=249.61, samples=20 00:37:37.097 lat (usec) : 1000=0.08% 00:37:37.097 lat (msec) : 2=0.36%, 4=1.80%, 10=4.70%, 20=7.35%, 50=16.41% 00:37:37.097 lat (msec) : 100=13.44%, 250=45.46%, 500=10.40% 00:37:37.097 cpu : usr=1.33%, sys=1.61%, ctx=3486, majf=0, minf=1 00:37:37.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:37:37.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:37.097 issued rwts: total=0,4941,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.097 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:37.097 job1: (groupid=0, jobs=1): err= 0: pid=2861230: Mon Jul 22 16:50:55 2024 00:37:37.097 write: IOPS=499, BW=125MiB/s (131MB/s)(1265MiB/10124msec); 0 zone resets 00:37:37.097 slat (usec): min=26, max=60938, avg=1131.29, stdev=4123.38 00:37:37.097 clat (usec): min=914, max=411315, avg=126887.58, stdev=96214.29 00:37:37.097 lat (usec): min=955, max=427151, avg=128018.87, stdev=97465.18 00:37:37.097 clat percentiles (msec): 00:37:37.097 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 20], 20.00th=[ 35], 00:37:37.097 | 30.00th=[ 55], 40.00th=[ 81], 50.00th=[ 106], 60.00th=[ 148], 00:37:37.097 | 70.00th=[ 180], 80.00th=[ 201], 90.00th=[ 251], 95.00th=[ 334], 00:37:37.097 | 99.00th=[ 393], 99.50th=[ 401], 99.90th=[ 409], 99.95th=[ 414], 00:37:37.097 | 99.99th=[ 414] 00:37:37.097 bw ( KiB/s): min=43008, max=267264, per=9.28%, avg=127886.75, stdev=58831.14, samples=20 00:37:37.097 iops : min= 168, max= 1044, avg=499.55, stdev=229.81, samples=20 00:37:37.097 lat (usec) : 1000=0.04% 00:37:37.097 lat (msec) : 2=0.51%, 4=1.42%, 10=3.52%, 20=5.32%, 50=16.92% 00:37:37.097 lat (msec) : 100=20.52%, 250=41.52%, 500=10.22% 00:37:37.097 cpu : usr=1.57%, sys=1.54%, ctx=3585, majf=0, minf=1 00:37:37.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:37:37.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:37.098 issued rwts: total=0,5058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:37.098 job2: (groupid=0, jobs=1): err= 0: pid=2861231: Mon Jul 22 16:50:55 2024 00:37:37.098 write: IOPS=552, BW=138MiB/s (145MB/s)(1404MiB/10162msec); 0 zone resets 00:37:37.098 
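For reference, the fio job file that scripts/fio-wrapper assembles from these flags (-i 262144 maps to bs, -d 64 to iodepth, -t randwrite to rw, -r 10 to runtime) and the [global]/[jobN] sections echoed above can be reproduced by hand; the device order follows the log's job list, and /tmp/nvmf.fio is an illustrative path, not the wrapper's real output location:

# Hand-rolled equivalent of the job file fio-wrapper generated for the
# randwrite pass above.
cat > /tmp/nvmf.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme10n1
[job2]
filename=/dev/nvme1n1
[job3]
filename=/dev/nvme2n1
[job4]
filename=/dev/nvme3n1
[job5]
filename=/dev/nvme4n1
[job6]
filename=/dev/nvme5n1
[job7]
filename=/dev/nvme6n1
[job8]
filename=/dev/nvme7n1
[job9]
filename=/dev/nvme8n1
[job10]
filename=/dev/nvme9n1
EOF
fio /tmp/nvmf.fio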
slat (usec): min=22, max=176089, avg=1037.66, stdev=4734.08 00:37:37.098 clat (usec): min=1318, max=637951, avg=114521.12, stdev=93584.27 00:37:37.098 lat (usec): min=1371, max=638008, avg=115558.78, stdev=94724.25 00:37:37.098 clat percentiles (msec): 00:37:37.098 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 42], 00:37:37.098 | 30.00th=[ 48], 40.00th=[ 64], 50.00th=[ 83], 60.00th=[ 116], 00:37:37.098 | 70.00th=[ 169], 80.00th=[ 199], 90.00th=[ 220], 95.00th=[ 249], 00:37:37.098 | 99.00th=[ 498], 99.50th=[ 523], 99.90th=[ 527], 99.95th=[ 634], 00:37:37.098 | 99.99th=[ 642] 00:37:37.098 bw ( KiB/s): min=30720, max=275456, per=10.32%, avg=142176.40, stdev=68012.50, samples=20 00:37:37.098 iops : min= 120, max= 1076, avg=555.30, stdev=265.69, samples=20 00:37:37.098 lat (msec) : 2=0.25%, 4=0.48%, 10=2.67%, 20=4.93%, 50=24.28% 00:37:37.098 lat (msec) : 100=23.82%, 250=38.70%, 500=3.86%, 750=1.00% 00:37:37.098 cpu : usr=1.60%, sys=1.51%, ctx=3844, majf=0, minf=1 00:37:37.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:37:37.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:37.098 issued rwts: total=0,5617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:37.098 job3: (groupid=0, jobs=1): err= 0: pid=2861232: Mon Jul 22 16:50:55 2024 00:37:37.098 write: IOPS=391, BW=97.9MiB/s (103MB/s)(996MiB/10180msec); 0 zone resets 00:37:37.098 slat (usec): min=27, max=160921, avg=1773.64, stdev=6157.24 00:37:37.098 clat (usec): min=1522, max=435302, avg=161541.33, stdev=98447.29 00:37:37.098 lat (usec): min=1570, max=435344, avg=163314.97, stdev=99874.61 00:37:37.098 clat percentiles (msec): 00:37:37.098 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 45], 20.00th=[ 62], 00:37:37.098 | 30.00th=[ 90], 40.00th=[ 127], 50.00th=[ 155], 60.00th=[ 184], 00:37:37.098 | 70.00th=[ 205], 80.00th=[ 236], 90.00th=[ 305], 95.00th=[ 355], 00:37:37.098 | 99.00th=[ 405], 99.50th=[ 426], 99.90th=[ 435], 99.95th=[ 435], 00:37:37.098 | 99.99th=[ 435] 00:37:37.098 bw ( KiB/s): min=43008, max=188928, per=7.29%, avg=100417.90, stdev=45090.06, samples=20 00:37:37.098 iops : min= 168, max= 738, avg=392.25, stdev=176.12, samples=20 00:37:37.098 lat (msec) : 2=0.10%, 4=0.30%, 10=1.41%, 20=2.03%, 50=9.36% 00:37:37.098 lat (msec) : 100=18.52%, 250=50.51%, 500=17.77% 00:37:37.098 cpu : usr=1.21%, sys=1.11%, ctx=2338, majf=0, minf=1 00:37:37.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:37:37.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:37.098 issued rwts: total=0,3985,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:37.098 job4: (groupid=0, jobs=1): err= 0: pid=2861233: Mon Jul 22 16:50:55 2024 00:37:37.098 write: IOPS=362, BW=90.6MiB/s (95.0MB/s)(918MiB/10129msec); 0 zone resets 00:37:37.098 slat (usec): min=24, max=94289, avg=2231.96, stdev=5883.88 00:37:37.098 clat (usec): min=1110, max=438656, avg=174216.85, stdev=103089.11 00:37:37.098 lat (usec): min=1157, max=438714, avg=176448.82, stdev=104630.04 00:37:37.098 clat percentiles (msec): 00:37:37.098 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 28], 20.00th=[ 75], 00:37:37.098 | 30.00th=[ 117], 40.00th=[ 146], 50.00th=[ 186], 60.00th=[ 201], 00:37:37.098 | 
70.00th=[ 218], 80.00th=[ 251], 90.00th=[ 330], 95.00th=[ 359], 00:37:37.098 | 99.00th=[ 418], 99.50th=[ 430], 99.90th=[ 439], 99.95th=[ 439], 00:37:37.098 | 99.99th=[ 439] 00:37:37.098 bw ( KiB/s): min=40960, max=203264, per=6.71%, avg=92394.15, stdev=41569.82, samples=20 00:37:37.098 iops : min= 160, max= 794, avg=360.85, stdev=162.35, samples=20 00:37:37.098 lat (msec) : 2=0.30%, 4=0.41%, 10=1.96%, 20=4.58%, 50=8.82% 00:37:37.098 lat (msec) : 100=10.05%, 250=53.95%, 500=19.93% 00:37:37.098 cpu : usr=1.27%, sys=1.04%, ctx=1891, majf=0, minf=1 00:37:37.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:37:37.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:37.098 issued rwts: total=0,3672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:37.098 job5: (groupid=0, jobs=1): err= 0: pid=2861234: Mon Jul 22 16:50:55 2024 00:37:37.098 write: IOPS=461, BW=115MiB/s (121MB/s)(1176MiB/10197msec); 0 zone resets 00:37:37.098 slat (usec): min=23, max=130539, avg=1759.51, stdev=5506.93 00:37:37.098 clat (usec): min=903, max=456732, avg=136381.04, stdev=106726.75 00:37:37.098 lat (usec): min=932, max=456783, avg=138140.55, stdev=108107.90 00:37:37.098 clat percentiles (msec): 00:37:37.098 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 49], 00:37:37.098 | 30.00th=[ 52], 40.00th=[ 75], 50.00th=[ 102], 60.00th=[ 125], 00:37:37.098 | 70.00th=[ 180], 80.00th=[ 236], 90.00th=[ 305], 95.00th=[ 359], 00:37:37.098 | 99.00th=[ 401], 99.50th=[ 414], 99.90th=[ 456], 99.95th=[ 456], 00:37:37.098 | 99.99th=[ 456] 00:37:37.098 bw ( KiB/s): min=45056, max=330240, per=8.63%, avg=118847.25, stdev=77967.18, samples=20 00:37:37.098 iops : min= 176, max= 1290, avg=464.20, stdev=304.56, samples=20 00:37:37.098 lat (usec) : 1000=0.09% 00:37:37.098 lat (msec) : 2=0.02%, 4=0.45%, 10=2.44%, 20=2.89%, 50=18.53% 00:37:37.098 lat (msec) : 100=25.29%, 250=33.11%, 500=17.17% 00:37:37.098 cpu : usr=1.42%, sys=1.22%, ctx=2142, majf=0, minf=1 00:37:37.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:37:37.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:37.098 issued rwts: total=0,4705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:37.098 job6: (groupid=0, jobs=1): err= 0: pid=2861237: Mon Jul 22 16:50:55 2024 00:37:37.098 write: IOPS=443, BW=111MiB/s (116MB/s)(1122MiB/10116msec); 0 zone resets 00:37:37.098 slat (usec): min=20, max=69600, avg=1266.83, stdev=4199.45 00:37:37.098 clat (usec): min=1034, max=458036, avg=142826.47, stdev=97492.31 00:37:37.098 lat (usec): min=1069, max=458096, avg=144093.31, stdev=98308.20 00:37:37.098 clat percentiles (msec): 00:37:37.098 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 13], 20.00th=[ 37], 00:37:37.098 | 30.00th=[ 72], 40.00th=[ 110], 50.00th=[ 146], 60.00th=[ 180], 00:37:37.098 | 70.00th=[ 203], 80.00th=[ 222], 90.00th=[ 271], 95.00th=[ 313], 00:37:37.098 | 99.00th=[ 363], 99.50th=[ 422], 99.90th=[ 451], 99.95th=[ 456], 00:37:37.098 | 99.99th=[ 460] 00:37:37.098 bw ( KiB/s): min=53248, max=230912, per=8.22%, avg=113217.30, stdev=46606.85, samples=20 00:37:37.098 iops : min= 208, max= 902, avg=442.25, stdev=182.06, samples=20 00:37:37.098 lat (msec) : 2=0.53%, 4=1.49%, 
10=6.02%, 20=6.60%, 50=9.81% 00:37:37.098 lat (msec) : 100=12.55%, 250=49.40%, 500=13.60% 00:37:37.098 cpu : usr=1.27%, sys=1.53%, ctx=2964, majf=0, minf=1 00:37:37.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:37:37.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:37.098 issued rwts: total=0,4486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:37.098 job7: (groupid=0, jobs=1): err= 0: pid=2861238: Mon Jul 22 16:50:55 2024 00:37:37.098 write: IOPS=637, BW=159MiB/s (167MB/s)(1626MiB/10193msec); 0 zone resets 00:37:37.098 slat (usec): min=21, max=135999, avg=827.44, stdev=3426.90 00:37:37.098 clat (usec): min=1005, max=508137, avg=99400.18, stdev=92500.60 00:37:37.098 lat (usec): min=1044, max=508191, avg=100227.62, stdev=93125.53 00:37:37.098 clat percentiles (msec): 00:37:37.098 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 11], 20.00th=[ 23], 00:37:37.098 | 30.00th=[ 43], 40.00th=[ 48], 50.00th=[ 64], 60.00th=[ 83], 00:37:37.098 | 70.00th=[ 136], 80.00th=[ 186], 90.00th=[ 224], 95.00th=[ 275], 00:37:37.098 | 99.00th=[ 401], 99.50th=[ 418], 99.90th=[ 451], 99.95th=[ 460], 00:37:37.098 | 99.99th=[ 510] 00:37:37.098 bw ( KiB/s): min=62976, max=344576, per=11.97%, avg=164846.60, stdev=83668.69, samples=20 00:37:37.098 iops : min= 246, max= 1346, avg=643.90, stdev=326.86, samples=20 00:37:37.098 lat (msec) : 2=0.77%, 4=2.35%, 10=6.11%, 20=9.35%, 50=25.15% 00:37:37.098 lat (msec) : 100=20.67%, 250=28.96%, 500=6.60%, 750=0.05% 00:37:37.098 cpu : usr=1.56%, sys=2.05%, ctx=4310, majf=0, minf=1 00:37:37.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:37:37.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:37.098 issued rwts: total=0,6502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:37.098 job8: (groupid=0, jobs=1): err= 0: pid=2861239: Mon Jul 22 16:50:55 2024 00:37:37.098 write: IOPS=494, BW=124MiB/s (130MB/s)(1262MiB/10200msec); 0 zone resets 00:37:37.098 slat (usec): min=21, max=173896, avg=865.91, stdev=5426.11 00:37:37.098 clat (usec): min=1081, max=621080, avg=128406.34, stdev=117929.24 00:37:37.098 lat (usec): min=1198, max=621140, avg=129272.25, stdev=119155.67 00:37:37.098 clat percentiles (msec): 00:37:37.098 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 22], 00:37:37.098 | 30.00th=[ 50], 40.00th=[ 74], 50.00th=[ 99], 60.00th=[ 130], 00:37:37.098 | 70.00th=[ 167], 80.00th=[ 201], 90.00th=[ 288], 95.00th=[ 384], 00:37:37.098 | 99.00th=[ 498], 99.50th=[ 518], 99.90th=[ 523], 99.95th=[ 600], 00:37:37.098 | 99.99th=[ 625] 00:37:37.098 bw ( KiB/s): min=30720, max=223232, per=9.26%, avg=127545.85, stdev=50498.35, samples=20 00:37:37.098 iops : min= 120, max= 872, avg=498.20, stdev=197.23, samples=20 00:37:37.098 lat (msec) : 2=0.53%, 4=2.58%, 10=8.76%, 20=7.17%, 50=11.14% 00:37:37.098 lat (msec) : 100=20.17%, 250=37.41%, 500=11.33%, 750=0.91% 00:37:37.098 cpu : usr=1.42%, sys=1.76%, ctx=4145, majf=0, minf=1 00:37:37.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:37:37.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:37:37.098 issued rwts: total=0,5047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:37.098 job9: (groupid=0, jobs=1): err= 0: pid=2861242: Mon Jul 22 16:50:55 2024 00:37:37.098 write: IOPS=469, BW=117MiB/s (123MB/s)(1188MiB/10129msec); 0 zone resets 00:37:37.098 slat (usec): min=16, max=177680, avg=1416.73, stdev=5296.94 00:37:37.098 clat (usec): min=1212, max=583011, avg=134951.60, stdev=104939.14 00:37:37.098 lat (usec): min=1284, max=583075, avg=136368.33, stdev=106175.85 00:37:37.098 clat percentiles (msec): 00:37:37.098 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 13], 20.00th=[ 28], 00:37:37.099 | 30.00th=[ 50], 40.00th=[ 91], 50.00th=[ 133], 60.00th=[ 167], 00:37:37.099 | 70.00th=[ 188], 80.00th=[ 209], 90.00th=[ 251], 95.00th=[ 351], 00:37:37.099 | 99.00th=[ 447], 99.50th=[ 460], 99.90th=[ 481], 99.95th=[ 498], 00:37:37.099 | 99.99th=[ 584] 00:37:37.099 bw ( KiB/s): min=34816, max=395264, per=8.72%, avg=120042.25, stdev=73773.94, samples=20 00:37:37.099 iops : min= 136, max= 1544, avg=468.90, stdev=288.18, samples=20 00:37:37.099 lat (msec) : 2=0.34%, 4=1.94%, 10=5.30%, 20=6.73%, 50=15.74% 00:37:37.099 lat (msec) : 100=12.37%, 250=47.33%, 500=10.23%, 750=0.02% 00:37:37.099 cpu : usr=1.40%, sys=1.32%, ctx=3005, majf=0, minf=1 00:37:37.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:37:37.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:37.099 issued rwts: total=0,4752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:37.099 job10: (groupid=0, jobs=1): err= 0: pid=2861243: Mon Jul 22 16:50:55 2024 00:37:37.099 write: IOPS=600, BW=150MiB/s (157MB/s)(1532MiB/10202msec); 0 zone resets 00:37:37.099 slat (usec): min=19, max=185145, avg=713.75, stdev=4068.30 00:37:37.099 clat (usec): min=903, max=486907, avg=105812.69, stdev=94071.11 00:37:37.099 lat (usec): min=983, max=621050, avg=106526.44, stdev=94889.15 00:37:37.099 clat percentiles (msec): 00:37:37.099 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 11], 20.00th=[ 24], 00:37:37.099 | 30.00th=[ 34], 40.00th=[ 53], 50.00th=[ 77], 60.00th=[ 110], 00:37:37.099 | 70.00th=[ 146], 80.00th=[ 190], 90.00th=[ 234], 95.00th=[ 288], 00:37:37.099 | 99.00th=[ 401], 99.50th=[ 426], 99.90th=[ 468], 99.95th=[ 477], 00:37:37.099 | 99.99th=[ 489] 00:37:37.099 bw ( KiB/s): min=77824, max=268800, per=11.27%, avg=155200.30, stdev=54402.08, samples=20 00:37:37.099 iops : min= 304, max= 1050, avg=606.20, stdev=212.53, samples=20 00:37:37.099 lat (usec) : 1000=0.03% 00:37:37.099 lat (msec) : 2=0.85%, 4=2.50%, 10=6.19%, 20=6.89%, 50=22.79% 00:37:37.099 lat (msec) : 100=18.61%, 250=35.13%, 500=7.02% 00:37:37.099 cpu : usr=1.65%, sys=2.03%, ctx=4860, majf=0, minf=1 00:37:37.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:37:37.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:37.099 issued rwts: total=0,6126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:37.099 00:37:37.099 Run status group 0 (all jobs): 00:37:37.099 WRITE: bw=1345MiB/s (1410MB/s), 90.6MiB/s-159MiB/s (95.0MB/s-167MB/s), io=13.4GiB (14.4GB), run=10116-10202msec 00:37:37.099 00:37:37.099 Disk stats (read/write): 00:37:37.099 
nvme0n1: ios=49/9847, merge=0/0, ticks=159/1247771, in_queue=1247930, util=97.80% 00:37:37.099 nvme10n1: ios=38/9897, merge=0/0, ticks=39/1222863, in_queue=1222902, util=97.24% 00:37:37.099 nvme1n1: ios=33/11228, merge=0/0, ticks=852/1247201, in_queue=1248053, util=99.81% 00:37:37.099 nvme2n1: ios=43/7951, merge=0/0, ticks=1495/1231487, in_queue=1232982, util=100.00% 00:37:37.099 nvme3n1: ios=20/7130, merge=0/0, ticks=310/1211614, in_queue=1211924, util=98.44% 00:37:37.099 nvme4n1: ios=44/9376, merge=0/0, ticks=2295/1229201, in_queue=1231496, util=100.00% 00:37:37.099 nvme5n1: ios=42/8692, merge=0/0, ticks=2562/1215091, in_queue=1217653, util=99.86% 00:37:37.099 nvme6n1: ios=41/12974, merge=0/0, ticks=746/1246943, in_queue=1247689, util=100.00% 00:37:37.099 nvme7n1: ios=0/10046, merge=0/0, ticks=0/1251658, in_queue=1251658, util=98.69% 00:37:37.099 nvme8n1: ios=0/9282, merge=0/0, ticks=0/1216116, in_queue=1216116, util=98.90% 00:37:37.099 nvme9n1: ios=0/12211, merge=0/0, ticks=0/1253762, in_queue=1253762, util=99.06% 00:37:37.099 16:50:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:37:37.099 16:50:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:37:37.099 16:50:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:37.099 16:50:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:37.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:37:37.099 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:37:37.099 
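# Annotation: the eleven-job summary above came from fio's libaio engine run
# against the connected NVMe-oF namespaces. A minimal command line with the
# same shape as one of those jobs is sketched below; it is inferred from the
# logged parameters (randwrite, bs=256KiB, iodepth=64, ~10s time-based run),
# not the exact invocation built by the test's fio wrapper:
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --thread \
    --rw=randwrite --bs=256k --iodepth=64 --time_based=1 --runtime=10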
16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:37.099 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:37:37.357 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:37:37.357 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:37:37.357 
16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:37.357 16:50:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:37:37.615 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:37:37.615 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:37.615 16:50:57 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:37:37.873 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:37.873 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:37:38.131 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:37:38.131 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk 
-o NAME,SERIAL 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:38.131 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:37:38.389 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:37:38.389 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:37:38.389 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:37:38.389 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:38.389 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:37:38.389 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:38.389 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:37:38.389 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:37:38.389 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:37:38.390 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.390 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:38.390 16:50:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.390 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:38.390 16:50:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:37:38.390 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:37:38.390 16:50:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:37:38.390 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:37:38.390 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:38.390 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:37:38.390 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:38.390 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:37:38.648 
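# Annotation: the teardown being traced here repeats one fixed pattern per
# subsystem, condensed below. rpc_cmd and waitforserial_disconnect are helper
# functions from the SPDK test harness, shown as used in the trace:
for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"             # drop the initiator-side controller
    waitforserial_disconnect "SPDK$i"                            # poll lsblk until serial SPDK$i disappears
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"  # remove the target-side subsystem
done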
16:50:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:38.648 rmmod nvme_tcp 00:37:38.648 rmmod nvme_fabrics 00:37:38.648 rmmod nvme_keyring 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2855792 ']' 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2855792 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 2855792 ']' 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 2855792 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2855792 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2855792' 00:37:38.648 killing process with pid 2855792 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 2855792 00:37:38.648 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 2855792 00:37:39.214 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:39.214 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:39.214 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:39.214 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:39.214 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 
-- # remove_spdk_ns 00:37:39.214 16:50:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.214 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:39.214 16:50:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.115 16:51:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:41.115 00:37:41.115 real 1m0.831s 00:37:41.115 user 3m28.977s 00:37:41.115 sys 0m22.533s 00:37:41.115 16:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:41.115 16:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:41.115 ************************************ 00:37:41.115 END TEST nvmf_multiconnection 00:37:41.115 ************************************ 00:37:41.115 16:51:00 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:37:41.115 16:51:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:41.115 16:51:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:41.115 16:51:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:41.374 ************************************ 00:37:41.374 START TEST nvmf_initiator_timeout 00:37:41.374 ************************************ 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:37:41.374 * Looking for test storage... 00:37:41.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.374 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # 
export NVMF_APP_SHM_ID 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:37:41.375 16:51:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:37:43.903 16:51:03 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:37:43.903 Found 0000:82:00.0 (0x8086 - 0x159b) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:37:43.903 Found 0000:82:00.1 (0x8086 - 0x159b) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:43.903 16:51:03 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:37:43.903 Found net devices under 0000:82:00.0: cvl_0_0 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:37:43.903 Found net devices under 0000:82:00.1: cvl_0_1 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 
00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:43.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:43.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:37:43.903 00:37:43.903 --- 10.0.0.2 ping statistics --- 00:37:43.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.903 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:37:43.903 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:43.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:43.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:37:43.904 00:37:43.904 --- 10.0.0.1 ping statistics --- 00:37:43.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.904 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2864984 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2864984 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 2864984 ']' 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:43.904 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:43.904 [2024-07-22 16:51:03.484594] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:43.904 [2024-07-22 16:51:03.484675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:43.904 EAL: No free 2048 kB hugepages reported on node 1 00:37:44.162 [2024-07-22 16:51:03.562416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:44.162 [2024-07-22 16:51:03.649097] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
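# Annotation: at this point nvmf_tcp_init has built the back-to-back test
# topology: one E810 port (cvl_0_0, 10.0.0.2) is moved into the
# cvl_0_0_ns_spdk namespace to host the target, while its peer port
# (cvl_0_1, 10.0.0.1) stays in the default namespace for the initiator.
# The equivalent manual setup, paraphrasing the commands traced above:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # verify initiator -> target reachability across namespaces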
00:37:44.162 [2024-07-22 16:51:03.649150] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:44.162 [2024-07-22 16:51:03.649163] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:44.162 [2024-07-22 16:51:03.649175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:44.162 [2024-07-22 16:51:03.649185] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:44.162 [2024-07-22 16:51:03.649234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.162 [2024-07-22 16:51:03.649290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:44.162 [2024-07-22 16:51:03.649359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:44.162 [2024-07-22 16:51:03.649361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.162 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:44.162 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:37:44.162 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:44.162 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:44.162 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:44.162 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:44.162 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:37:44.162 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:44.162 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.162 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:44.420 Malloc0 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:44.420 Delay0 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:44.420 [2024-07-22 16:51:03.842358] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:44.420 [2024-07-22 16:51:03.870583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.420 16:51:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:44.985 16:51:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:37:44.985 16:51:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:37:44.985 16:51:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:44.985 16:51:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:44.985 16:51:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:37:47.509 16:51:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:47.509 16:51:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:47.509 16:51:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:37:47.509 16:51:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:47.509 16:51:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:47.509 16:51:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:37:47.509 16:51:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2865405 00:37:47.509 16:51:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:37:47.509 16:51:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:37:47.509 [global] 00:37:47.509 thread=1 00:37:47.509 invalidate=1 00:37:47.509 rw=write 00:37:47.509 time_based=1 00:37:47.509 runtime=60 00:37:47.509 
ioengine=libaio 00:37:47.509 direct=1 00:37:47.509 bs=4096 00:37:47.509 iodepth=1 00:37:47.509 norandommap=0 00:37:47.509 numjobs=1 00:37:47.509 00:37:47.509 verify_dump=1 00:37:47.509 verify_backlog=512 00:37:47.509 verify_state_save=0 00:37:47.509 do_verify=1 00:37:47.509 verify=crc32c-intel 00:37:47.509 [job0] 00:37:47.509 filename=/dev/nvme0n1 00:37:47.509 Could not set queue depth (nvme0n1) 00:37:47.509 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:47.509 fio-3.35 00:37:47.509 Starting 1 thread 00:37:50.032 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:37:50.032 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.032 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:50.032 true 00:37:50.032 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.032 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:50.033 true 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:50.033 true 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:50.033 true 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.033 16:51:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:53.311 true 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:53.311 true 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.311 
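What the RPC traffic around here amounts to: a 64 MiB malloc bdev is wrapped in a delay bdev with a 30 µs baseline on all four latency knobs, exported over NVMe/TCP, and connected from the host; once fio is writing, the delay is raised past the initiator's default 30 s I/O timeout, held for a few seconds, then restored. A minimal sketch, assuming scripts/rpc.py from an SPDK checkout (NQN, serial, and latency values are the ones issued in this run; the --hostnqn/--hostid flags used above are omitted for brevity):

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # stall I/O past the initiator's 30 s timeout (values in microseconds, as issued above)
    $rpc bdev_delay_update_latency Delay0 avg_read 31000000
    $rpc bdev_delay_update_latency Delay0 avg_write 31000000
    $rpc bdev_delay_update_latency Delay0 p99_read 31000000
    $rpc bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # then restore the 30 us baseline so queued I/O can drain
    for knob in avg_read avg_write p99_read p99_write; do
        $rpc bdev_delay_update_latency Delay0 "$knob" 30
    done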
16:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:53.311 true 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:53.311 true 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:37:53.311 16:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2865405 00:38:49.511 00:38:49.511 job0: (groupid=0, jobs=1): err= 0: pid=2865477: Mon Jul 22 16:52:06 2024 00:38:49.511 read: IOPS=32, BW=132KiB/s (135kB/s)(7896KiB/60036msec) 00:38:49.511 slat (nsec): min=5004, max=58828, avg=12566.15, stdev=8094.39 00:38:49.511 clat (usec): min=245, max=41160k, avg=30108.04, stdev=926346.21 00:38:49.511 lat (usec): min=251, max=41160k, avg=30120.60, stdev=926346.45 00:38:49.511 clat percentiles (usec): 00:38:49.511 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 277], 00:38:49.512 | 20.00th=[ 293], 30.00th=[ 310], 40.00th=[ 326], 00:38:49.512 | 50.00th=[ 347], 60.00th=[ 371], 70.00th=[ 420], 00:38:49.512 | 80.00th=[ 40633], 90.00th=[ 41157], 95.00th=[ 41157], 00:38:49.512 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 44827], 00:38:49.512 | 99.95th=[17112761], 99.99th=[17112761] 00:38:49.512 write: IOPS=34, BW=136KiB/s (140kB/s)(8192KiB/60036msec); 0 zone resets 00:38:49.512 slat (usec): min=6, max=25826, avg=24.59, stdev=570.45 00:38:49.512 clat (usec): min=184, max=1024, avg=249.21, stdev=48.37 00:38:49.512 lat (usec): min=194, max=26108, avg=273.80, stdev=573.39 00:38:49.512 clat percentiles (usec): 00:38:49.512 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:38:49.512 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 243], 00:38:49.512 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 322], 95.00th=[ 347], 00:38:49.512 | 99.00th=[ 392], 99.50th=[ 400], 99.90th=[ 465], 99.95th=[ 478], 00:38:49.512 | 99.99th=[ 1029] 00:38:49.512 bw ( KiB/s): min= 2680, max= 8192, per=100.00%, avg=5461.33, stdev=2756.35, samples=3 00:38:49.512 iops : min= 670, max= 2048, avg=1365.33, stdev=689.09, samples=3 00:38:49.512 lat (usec) : 250=36.13%, 500=51.96%, 750=0.97%, 1000=0.07% 00:38:49.512 lat (msec) : 2=0.07%, 50=10.77%, >=2000=0.02% 00:38:49.512 cpu : usr=0.07%, sys=0.10%, ctx=4027, majf=0, minf=2 00:38:49.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:49.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.512 issued rwts: total=1974,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:49.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:49.512 00:38:49.512 Run status group 0 (all jobs): 00:38:49.512 READ: bw=132KiB/s (135kB/s), 132KiB/s-132KiB/s (135kB/s-135kB/s), 
io=7896KiB (8086kB), run=60036-60036msec 00:38:49.512 WRITE: bw=136KiB/s (140kB/s), 136KiB/s-136KiB/s (140kB/s-140kB/s), io=8192KiB (8389kB), run=60036-60036msec 00:38:49.512 00:38:49.512 Disk stats (read/write): 00:38:49.512 nvme0n1: ios=2022/2048, merge=0/0, ticks=19369/496, in_queue=19865, util=99.65% 00:38:49.512 16:52:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:49.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:38:49.512 nvmf hotplug test: fio successful as expected 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:49.512 rmmod nvme_tcp 00:38:49.512 rmmod nvme_fabrics 00:38:49.512 rmmod nvme_keyring 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2864984 ']' 00:38:49.512 16:52:07 
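For reference, the job summarized above was generated by SPDK's fio-wrapper from '-p nvmf -i 4096 -d 1 -t write -r 60 -v'. Rebuilt from the config dump printed before the run, it is roughly this job file (a sketch; /dev/nvme0n1 is the namespace device this run enumerated), which would be run as 'fio initiator_timeout.fio':

    # initiator_timeout.fio
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=60
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1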
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2864984 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 2864984 ']' 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 2864984 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2864984 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2864984' 00:38:49.512 killing process with pid 2864984 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 2864984 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 2864984 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:49.512 16:52:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.078 16:52:09 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:50.078 00:38:50.078 real 1m8.657s 00:38:50.078 user 4m11.993s 00:38:50.078 sys 0m6.159s 00:38:50.078 16:52:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:50.078 16:52:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:50.078 ************************************ 00:38:50.078 END TEST nvmf_initiator_timeout 00:38:50.078 ************************************ 00:38:50.078 16:52:09 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:38:50.078 16:52:09 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:38:50.078 16:52:09 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:38:50.078 16:52:09 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:38:50.078 16:52:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:52.607 
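The teardown that just completed mirrors the bring-up; condensed into one sketch (names and the nvmfpid variable are from this run; the last two steps are what the harness's untraced remove_spdk_ns helper amounts to, so treat them as an assumption):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"
    modprobe -v -r nvme-tcp       # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1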
16:52:11 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:38:52.607 Found 0000:82:00.0 (0x8086 - 0x159b) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:38:52.607 Found 0000:82:00.1 (0x8086 - 0x159b) 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:52.607 16:52:11 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@366 
-- # (( 0 > 0 )) 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:38:52.608 Found net devices under 0000:82:00.0: cvl_0_0 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:38:52.608 Found net devices under 0000:82:00.1: cvl_0_1 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:38:52.608 16:52:11 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:38:52.608 16:52:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:52.608 16:52:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:52.608 16:52:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:52.608 ************************************ 00:38:52.608 START TEST nvmf_perf_adq 00:38:52.608 ************************************ 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:38:52.608 * Looking for test storage... 
00:38:52.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:38:52.608 16:52:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:38:55.144 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:55.144 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:38:55.144 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:55.144 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:55.144 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:55.144 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:55.144 16:52:14 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:38:55.144 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:38:55.144 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:55.144 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:38:55.144 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:38:55.145 Found 0000:82:00.0 (0x8086 - 0x159b) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:38:55.145 Found 0000:82:00.1 (0x8086 - 0x159b) 
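This discovery pass, which runs each time the harness scans for usable NICs, buckets PCI functions by vendor:device ID (0x8086:0x159b is the E810-class part behind the cvl_* ports here) and then resolves each supported function to its kernel netdev through sysfs. The essential step, sketched with this machine's addresses:

    for pci in 0000:82:00.0 0000:82:00.1; do
        # every entry under the function's net/ directory is a netdev it owns
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done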
00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:38:55.145 Found net devices under 0000:82:00.0: cvl_0_0 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:38:55.145 Found net devices under 0000:82:00.1: cvl_0_1 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:38:55.145 16:52:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:38:55.795 16:52:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:38:57.722 16:52:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:39:02.992 16:52:22 
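adq_reload_driver above simply bounces the E810 'ice' driver so the ports come back with clean ADQ state before the test configures them; the pause gives the cvl_* netdevs time to re-enumerate:

    rmmod ice
    modprobe ice
    sleep 5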
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:02.992 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:39:02.993 Found 0000:82:00.0 (0x8086 - 0x159b) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:39:02.993 Found 0000:82:00.1 (0x8086 - 0x159b) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:39:02.993 Found net devices under 0000:82:00.0: cvl_0_0 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:39:02.993 Found net devices under 0000:82:00.1: cvl_0_1 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:02.993 16:52:22 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:02.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:02.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:39:02.993 00:39:02.993 --- 10.0.0.2 ping statistics --- 00:39:02.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.993 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:02.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:02.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:39:02.993 00:39:02.993 --- 10.0.0.1 ping statistics --- 00:39:02.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.993 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2878301 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2878301 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 2878301 ']' 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:02.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:02.993 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:02.993 [2024-07-22 16:52:22.494114] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:39:02.993 [2024-07-22 16:52:22.494209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:02.993 EAL: No free 2048 kB hugepages reported on node 1 00:39:02.993 [2024-07-22 16:52:22.567474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:03.251 [2024-07-22 16:52:22.657625] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:03.251 [2024-07-22 16:52:22.657692] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:03.251 [2024-07-22 16:52:22.657707] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:03.251 [2024-07-22 16:52:22.657719] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:03.251 [2024-07-22 16:52:22.657728] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:03.251 [2024-07-22 16:52:22.657812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:03.251 [2024-07-22 16:52:22.657886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:03.252 [2024-07-22 16:52:22.657953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:03.252 [2024-07-22 16:52:22.657955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:03.252 [2024-07-22 16:52:22.887873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.252 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:03.511 Malloc1 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:03.511 [2024-07-22 16:52:22.941365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2878345 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:39:03.511 16:52:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:03.511 EAL: No free 2048 kB hugepages reported on node 1 00:39:05.410 16:52:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:39:05.410 16:52:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:05.410 16:52:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:05.410 16:52:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:05.410 16:52:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:39:05.410 "tick_rate": 2700000000, 
00:39:05.410 "poll_groups": [ 00:39:05.410 { 00:39:05.410 "name": "nvmf_tgt_poll_group_000", 00:39:05.410 "admin_qpairs": 1, 00:39:05.410 "io_qpairs": 1, 00:39:05.410 "current_admin_qpairs": 1, 00:39:05.410 "current_io_qpairs": 1, 00:39:05.410 "pending_bdev_io": 0, 00:39:05.410 "completed_nvme_io": 19899, 00:39:05.410 "transports": [ 00:39:05.410 { 00:39:05.410 "trtype": "TCP" 00:39:05.410 } 00:39:05.410 ] 00:39:05.410 }, 00:39:05.410 { 00:39:05.410 "name": "nvmf_tgt_poll_group_001", 00:39:05.410 "admin_qpairs": 0, 00:39:05.410 "io_qpairs": 1, 00:39:05.410 "current_admin_qpairs": 0, 00:39:05.410 "current_io_qpairs": 1, 00:39:05.410 "pending_bdev_io": 0, 00:39:05.410 "completed_nvme_io": 19665, 00:39:05.410 "transports": [ 00:39:05.410 { 00:39:05.410 "trtype": "TCP" 00:39:05.410 } 00:39:05.410 ] 00:39:05.410 }, 00:39:05.410 { 00:39:05.410 "name": "nvmf_tgt_poll_group_002", 00:39:05.410 "admin_qpairs": 0, 00:39:05.410 "io_qpairs": 1, 00:39:05.410 "current_admin_qpairs": 0, 00:39:05.410 "current_io_qpairs": 1, 00:39:05.410 "pending_bdev_io": 0, 00:39:05.410 "completed_nvme_io": 19830, 00:39:05.410 "transports": [ 00:39:05.410 { 00:39:05.410 "trtype": "TCP" 00:39:05.410 } 00:39:05.410 ] 00:39:05.410 }, 00:39:05.410 { 00:39:05.410 "name": "nvmf_tgt_poll_group_003", 00:39:05.410 "admin_qpairs": 0, 00:39:05.410 "io_qpairs": 1, 00:39:05.410 "current_admin_qpairs": 0, 00:39:05.410 "current_io_qpairs": 1, 00:39:05.410 "pending_bdev_io": 0, 00:39:05.410 "completed_nvme_io": 19501, 00:39:05.410 "transports": [ 00:39:05.410 { 00:39:05.410 "trtype": "TCP" 00:39:05.410 } 00:39:05.410 ] 00:39:05.410 } 00:39:05.410 ] 00:39:05.410 }' 00:39:05.410 16:52:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:39:05.410 16:52:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:39:05.410 16:52:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:39:05.410 16:52:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:39:05.410 16:52:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2878345 00:39:13.551 Initializing NVMe Controllers 00:39:13.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:13.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:39:13.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:39:13.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:39:13.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:39:13.551 Initialization complete. Launching workers. 
00:39:13.551 ======================================================== 00:39:13.551 Latency(us) 00:39:13.551 Device Information : IOPS MiB/s Average min max 00:39:13.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10055.00 39.28 6367.08 2664.20 9910.83 00:39:13.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10173.00 39.74 6291.07 2295.03 9225.66 00:39:13.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10245.70 40.02 6248.90 1391.16 9754.14 00:39:13.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10182.60 39.78 6285.57 2165.18 9544.60 00:39:13.551 ======================================================== 00:39:13.551 Total : 40656.28 158.81 6297.86 1391.16 9910.83 00:39:13.551 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:13.551 rmmod nvme_tcp 00:39:13.551 rmmod nvme_fabrics 00:39:13.551 rmmod nvme_keyring 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2878301 ']' 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2878301 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 2878301 ']' 00:39:13.551 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 2878301 00:39:13.552 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:39:13.552 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:13.552 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2878301 00:39:13.552 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:13.552 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:13.552 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2878301' 00:39:13.552 killing process with pid 2878301 00:39:13.552 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 2878301 00:39:13.552 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 2878301 00:39:13.810 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:13.810 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:13.810 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:13.810 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:13.810 16:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:13.810 16:52:33 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:13.810 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:13.810 16:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.340 16:52:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:16.340 16:52:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:39:16.340 16:52:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:39:16.598 16:52:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:39:19.129 16:52:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:24.398 16:52:43 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:39:24.398 Found 0000:82:00.0 (0x8086 - 0x159b) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:39:24.398 Found 0000:82:00.1 (0x8086 - 0x159b) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:39:24.398 Found net devices under 0000:82:00.0: cvl_0_0 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.398 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:39:24.399 Found net devices under 0000:82:00.1: cvl_0_1 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:24.399 
16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:24.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:24.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:39:24.399 00:39:24.399 --- 10.0.0.2 ping statistics --- 00:39:24.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.399 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:24.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:24.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:39:24.399 00:39:24.399 --- 10.0.0.1 ping statistics --- 00:39:24.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.399 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:39:24.399 net.core.busy_poll = 1 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:39:24.399 net.core.busy_read = 1 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2880947 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2880947 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 2880947 ']' 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:24.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:24.399 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.399 [2024-07-22 16:52:43.548659] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:24.399 [2024-07-22 16:52:43.548775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:24.399 EAL: No free 2048 kB hugepages reported on node 1 00:39:24.399 [2024-07-22 16:52:43.631199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:24.399 [2024-07-22 16:52:43.722479] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:24.399 [2024-07-22 16:52:43.722541] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:24.399 [2024-07-22 16:52:43.722567] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:24.399 [2024-07-22 16:52:43.722580] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:24.399 [2024-07-22 16:52:43.722592] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
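For readers following the trace, the ADQ plumbing performed by adq_configure_driver just above reduces to a short, repeatable sequence. Below is a minimal sketch, not the script itself: it assumes an ice-driven E810 port named cvl_0_0 inside the cvl_0_0_ns_spdk namespace and an NVMe/TCP listener at 10.0.0.2:4420, all taken from this run; the NS shorthand is an editorial convenience, and the final XPS step is delegated to the set_xps_rxqs helper exactly as in the trace.

# Sketch of the ADQ host configuration exercised above (assumptions: ice/E810
# netdev cvl_0_0 in netns cvl_0_0_ns_spdk, NVMe/TCP listener 10.0.0.2:4420).
NS="ip netns exec cvl_0_0_ns_spdk"

# 1. Enable hardware TC offload and disable packet-inspect optimization.
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

# 2. Turn on kernel busy polling so socket reads spin instead of sleeping.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# 3. Create two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3.
$NS /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel

# 4. Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 in hardware.
$NS /usr/sbin/tc qdisc add dev cvl_0_0 ingress
$NS /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# 5. The trace then pins XPS transmit queues via scripts/perf/nvmf/set_xps_rxqs.

The effect is visible in the stats that follow: with ADQ engaged (sock_impl_set_options --enable-placement-id 1 and --sock-priority 1), I/O qpairs concentrate onto the ADQ-matched poll groups instead of spreading one per group as in the earlier non-ADQ run.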
00:39:24.400 [2024-07-22 16:52:43.722680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:24.400 [2024-07-22 16:52:43.722750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:24.400 [2024-07-22 16:52:43.722839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:24.400 [2024-07-22 16:52:43.722841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.400 [2024-07-22 16:52:43.913623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.400 Malloc1 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.400 16:52:43 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.400 [2024-07-22 16:52:43.964483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2881002 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:24.400 16:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:39:24.400 EAL: No free 2048 kB hugepages reported on node 1 00:39:26.929 16:52:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:39:26.929 16:52:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.929 16:52:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:26.929 16:52:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.929 16:52:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:39:26.929 "tick_rate": 2700000000, 00:39:26.929 "poll_groups": [ 00:39:26.929 { 00:39:26.929 "name": "nvmf_tgt_poll_group_000", 00:39:26.929 "admin_qpairs": 1, 00:39:26.929 "io_qpairs": 1, 00:39:26.929 "current_admin_qpairs": 1, 00:39:26.929 "current_io_qpairs": 1, 00:39:26.929 "pending_bdev_io": 0, 00:39:26.929 "completed_nvme_io": 24866, 00:39:26.929 "transports": [ 00:39:26.929 { 00:39:26.929 "trtype": "TCP" 00:39:26.929 } 00:39:26.929 ] 00:39:26.929 }, 00:39:26.929 { 00:39:26.929 "name": "nvmf_tgt_poll_group_001", 00:39:26.929 "admin_qpairs": 0, 00:39:26.929 "io_qpairs": 3, 00:39:26.929 "current_admin_qpairs": 0, 00:39:26.929 "current_io_qpairs": 3, 00:39:26.929 "pending_bdev_io": 0, 00:39:26.929 "completed_nvme_io": 26451, 00:39:26.929 "transports": [ 00:39:26.929 { 00:39:26.929 "trtype": "TCP" 00:39:26.929 } 00:39:26.929 ] 00:39:26.929 }, 00:39:26.929 { 00:39:26.929 "name": "nvmf_tgt_poll_group_002", 00:39:26.929 "admin_qpairs": 0, 00:39:26.929 "io_qpairs": 0, 00:39:26.929 "current_admin_qpairs": 0, 00:39:26.929 "current_io_qpairs": 0, 00:39:26.929 "pending_bdev_io": 0, 00:39:26.929 "completed_nvme_io": 0, 
00:39:26.929 "transports": [ 00:39:26.929 { 00:39:26.929 "trtype": "TCP" 00:39:26.929 } 00:39:26.929 ] 00:39:26.929 }, 00:39:26.929 { 00:39:26.929 "name": "nvmf_tgt_poll_group_003", 00:39:26.929 "admin_qpairs": 0, 00:39:26.929 "io_qpairs": 0, 00:39:26.929 "current_admin_qpairs": 0, 00:39:26.929 "current_io_qpairs": 0, 00:39:26.929 "pending_bdev_io": 0, 00:39:26.929 "completed_nvme_io": 0, 00:39:26.929 "transports": [ 00:39:26.929 { 00:39:26.929 "trtype": "TCP" 00:39:26.929 } 00:39:26.929 ] 00:39:26.929 } 00:39:26.929 ] 00:39:26.929 }' 00:39:26.929 16:52:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:39:26.929 16:52:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:39:26.929 16:52:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:39:26.929 16:52:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:39:26.929 16:52:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2881002 00:39:35.040 Initializing NVMe Controllers 00:39:35.040 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:35.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:39:35.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:39:35.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:39:35.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:39:35.040 Initialization complete. Launching workers. 00:39:35.040 ======================================================== 00:39:35.040 Latency(us) 00:39:35.040 Device Information : IOPS MiB/s Average min max 00:39:35.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4841.70 18.91 13250.79 1921.59 60380.29 00:39:35.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4500.40 17.58 14252.82 1831.15 61621.77 00:39:35.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13074.89 51.07 4895.55 1659.98 7123.90 00:39:35.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4650.90 18.17 13805.54 2026.11 61101.43 00:39:35.040 ======================================================== 00:39:35.040 Total : 27067.89 105.73 9476.79 1659.98 61621.77 00:39:35.040 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:35.040 rmmod nvme_tcp 00:39:35.040 rmmod nvme_fabrics 00:39:35.040 rmmod nvme_keyring 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2880947 ']' 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2880947 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 2880947 ']' 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 2880947 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2880947 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2880947' 00:39:35.040 killing process with pid 2880947 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 2880947 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 2880947 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:35.040 16:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.943 16:52:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:36.943 16:52:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:36.943 00:39:36.943 real 0m44.500s 00:39:36.943 user 2m39.813s 00:39:36.943 sys 0m10.135s 00:39:36.943 16:52:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:36.943 16:52:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:36.943 ************************************ 00:39:36.943 END TEST nvmf_perf_adq 00:39:36.943 ************************************ 00:39:36.943 16:52:56 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:39:36.943 16:52:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:36.943 16:52:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:36.943 16:52:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:36.943 ************************************ 00:39:36.943 START TEST nvmf_shutdown 00:39:36.943 ************************************ 00:39:36.943 16:52:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:39:37.201 * Looking for test storage... 
00:39:37.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.201 16:52:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.201 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:39:37.201 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.201 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:39:37.202 ************************************ 00:39:37.202 START TEST nvmf_shutdown_tc1 00:39:37.202 ************************************ 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:39:37.202 16:52:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:39:37.202 16:52:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:39:39.733 Found 0000:82:00.0 (0x8086 - 0x159b) 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:39.733 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:39:39.734 Found 0000:82:00.1 (0x8086 - 0x159b) 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:39.734 16:52:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:39:39.734 Found net devices under 0000:82:00.0: cvl_0_0 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:39:39.734 Found net devices under 0000:82:00.1: cvl_0_1 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:39.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:39.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:39:39.734 00:39:39.734 --- 10.0.0.2 ping statistics --- 00:39:39.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.734 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:39.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:39.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:39:39.734 00:39:39.734 --- 10.0.0.1 ping statistics --- 00:39:39.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.734 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:39.734 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:39.735 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:39.735 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2884542 00:39:39.735 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:39:39.735 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2884542 00:39:39.735 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 2884542 ']' 00:39:39.735 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:39.735 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:39.735 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:39.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:39.735 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:39.735 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:39.735 [2024-07-22 16:52:59.233074] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
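Worth pausing on what nvmf_tcp_init just did: both test endpoints live on one host, so cvl_0_0 was moved into a private namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, while cvl_0_1 stayed in the root namespace as 10.0.0.1. Without the namespace split the kernel would short-circuit 10.0.0.1 -> 10.0.0.2 over loopback and the e810 ports would carry no traffic; that rationale is inferred from the setup, not stated in the trace. Condensed, common.sh@248-268 amounts to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back

The sub-millisecond round trips in both directions validate the path before nvmf_tgt is launched inside the namespace (common.sh@480 above).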
00:39:39.735 [2024-07-22 16:52:59.233147] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:39.735 EAL: No free 2048 kB hugepages reported on node 1 00:39:39.735 [2024-07-22 16:52:59.313221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:39.993 [2024-07-22 16:52:59.411208] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:39.993 [2024-07-22 16:52:59.411282] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:39.993 [2024-07-22 16:52:59.411299] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:39.993 [2024-07-22 16:52:59.411313] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:39.993 [2024-07-22 16:52:59.411324] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:39.993 [2024-07-22 16:52:59.411420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:39.993 [2024-07-22 16:52:59.411528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:39.993 [2024-07-22 16:52:59.411581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:39.993 [2024-07-22 16:52:59.411578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:39.993 [2024-07-22 16:52:59.564896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:39.993 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:39.994 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:39.994 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:39.994 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:39.994 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:39:39.994 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:39.994 16:52:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:39.994 Malloc1 00:39:40.252 [2024-07-22 16:52:59.654419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:40.252 Malloc2 00:39:40.252 Malloc3 00:39:40.252 Malloc4 00:39:40.252 Malloc5 00:39:40.252 Malloc6 00:39:40.512 Malloc7 00:39:40.512 Malloc8 00:39:40.512 Malloc9 00:39:40.512 Malloc10 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2884722 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2884722 /var/tmp/bdevperf.sock 00:39:40.512 16:53:00 
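Each of the ten for/cat iterations above appends one block of RPCs to rpcs.txt, which shutdown.sh@35 then replays in a single rpc_cmd batch; the Malloc1 through Malloc10 bdevs and the single listener notice on 10.0.0.2:4420 are the visible result. The file itself is never echoed in the trace, so the per-subsystem block below is reconstructed from those effects; the malloc size and block size are illustrative, not taken from this run:

# hypothetical contents of one rpcs.txt block, for subsystem $i
bdev_malloc_create 128 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420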
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 2884722 ']' 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:40.512 { 00:39:40.512 "params": { 00:39:40.512 "name": "Nvme$subsystem", 00:39:40.512 "trtype": "$TEST_TRANSPORT", 00:39:40.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.512 "adrfam": "ipv4", 00:39:40.512 "trsvcid": "$NVMF_PORT", 00:39:40.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.512 "hdgst": ${hdgst:-false}, 00:39:40.512 "ddgst": ${ddgst:-false} 00:39:40.512 }, 00:39:40.512 "method": "bdev_nvme_attach_controller" 00:39:40.512 } 00:39:40.512 EOF 00:39:40.512 )") 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:40.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
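The heredoc template that dominates the next stretch of trace is gen_nvmf_target_json at work: one JSON fragment per subsystem, with $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expanded on each pass, then the fragments comma-joined under IFS=, and printed (the fully expanded result appears further down). Condensed to the part this trace shows; the real helper also round-trips the output through jq (common.sh@556) and embeds it in a complete bdev-subsystem config, both omitted here:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do             # defaults to a single subsystem
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
              "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4",
              "trsvcid": "$NVMF_PORT",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
              "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
              "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"               # comma-joins the ten fragments
}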
00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:40.512 { 00:39:40.512 "params": { 00:39:40.512 "name": "Nvme$subsystem", 00:39:40.512 "trtype": "$TEST_TRANSPORT", 00:39:40.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.512 "adrfam": "ipv4", 00:39:40.512 "trsvcid": "$NVMF_PORT", 00:39:40.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.512 "hdgst": ${hdgst:-false}, 00:39:40.512 "ddgst": ${ddgst:-false} 00:39:40.512 }, 00:39:40.512 "method": "bdev_nvme_attach_controller" 00:39:40.512 } 00:39:40.512 EOF 00:39:40.512 )") 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:40.512 { 00:39:40.512 "params": { 00:39:40.512 "name": "Nvme$subsystem", 00:39:40.512 "trtype": "$TEST_TRANSPORT", 00:39:40.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.512 "adrfam": "ipv4", 00:39:40.512 "trsvcid": "$NVMF_PORT", 00:39:40.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.512 "hdgst": ${hdgst:-false}, 00:39:40.512 "ddgst": ${ddgst:-false} 00:39:40.512 }, 00:39:40.512 "method": "bdev_nvme_attach_controller" 00:39:40.512 } 00:39:40.512 EOF 00:39:40.512 )") 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:40.512 { 00:39:40.512 "params": { 00:39:40.512 "name": "Nvme$subsystem", 00:39:40.512 "trtype": "$TEST_TRANSPORT", 00:39:40.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.512 "adrfam": "ipv4", 00:39:40.512 "trsvcid": "$NVMF_PORT", 00:39:40.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.512 "hdgst": ${hdgst:-false}, 00:39:40.512 "ddgst": ${ddgst:-false} 00:39:40.512 }, 00:39:40.512 "method": "bdev_nvme_attach_controller" 00:39:40.512 } 00:39:40.512 EOF 00:39:40.512 )") 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:40.512 { 00:39:40.512 "params": { 00:39:40.512 "name": "Nvme$subsystem", 00:39:40.512 "trtype": "$TEST_TRANSPORT", 00:39:40.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.512 "adrfam": "ipv4", 00:39:40.512 "trsvcid": "$NVMF_PORT", 00:39:40.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:39:40.512 "hdgst": ${hdgst:-false}, 00:39:40.512 "ddgst": ${ddgst:-false} 00:39:40.512 }, 00:39:40.512 "method": "bdev_nvme_attach_controller" 00:39:40.512 } 00:39:40.512 EOF 00:39:40.512 )") 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:40.512 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:40.512 { 00:39:40.512 "params": { 00:39:40.512 "name": "Nvme$subsystem", 00:39:40.513 "trtype": "$TEST_TRANSPORT", 00:39:40.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.513 "adrfam": "ipv4", 00:39:40.513 "trsvcid": "$NVMF_PORT", 00:39:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.513 "hdgst": ${hdgst:-false}, 00:39:40.513 "ddgst": ${ddgst:-false} 00:39:40.513 }, 00:39:40.513 "method": "bdev_nvme_attach_controller" 00:39:40.513 } 00:39:40.513 EOF 00:39:40.513 )") 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:40.513 { 00:39:40.513 "params": { 00:39:40.513 "name": "Nvme$subsystem", 00:39:40.513 "trtype": "$TEST_TRANSPORT", 00:39:40.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.513 "adrfam": "ipv4", 00:39:40.513 "trsvcid": "$NVMF_PORT", 00:39:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.513 "hdgst": ${hdgst:-false}, 00:39:40.513 "ddgst": ${ddgst:-false} 00:39:40.513 }, 00:39:40.513 "method": "bdev_nvme_attach_controller" 00:39:40.513 } 00:39:40.513 EOF 00:39:40.513 )") 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:40.513 { 00:39:40.513 "params": { 00:39:40.513 "name": "Nvme$subsystem", 00:39:40.513 "trtype": "$TEST_TRANSPORT", 00:39:40.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.513 "adrfam": "ipv4", 00:39:40.513 "trsvcid": "$NVMF_PORT", 00:39:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.513 "hdgst": ${hdgst:-false}, 00:39:40.513 "ddgst": ${ddgst:-false} 00:39:40.513 }, 00:39:40.513 "method": "bdev_nvme_attach_controller" 00:39:40.513 } 00:39:40.513 EOF 00:39:40.513 )") 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:40.513 { 00:39:40.513 "params": { 00:39:40.513 "name": "Nvme$subsystem", 00:39:40.513 "trtype": "$TEST_TRANSPORT", 00:39:40.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.513 "adrfam": "ipv4", 00:39:40.513 "trsvcid": "$NVMF_PORT", 00:39:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.513 "hdgst": 
${hdgst:-false}, 00:39:40.513 "ddgst": ${ddgst:-false} 00:39:40.513 }, 00:39:40.513 "method": "bdev_nvme_attach_controller" 00:39:40.513 } 00:39:40.513 EOF 00:39:40.513 )") 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:40.513 { 00:39:40.513 "params": { 00:39:40.513 "name": "Nvme$subsystem", 00:39:40.513 "trtype": "$TEST_TRANSPORT", 00:39:40.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.513 "adrfam": "ipv4", 00:39:40.513 "trsvcid": "$NVMF_PORT", 00:39:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.513 "hdgst": ${hdgst:-false}, 00:39:40.513 "ddgst": ${ddgst:-false} 00:39:40.513 }, 00:39:40.513 "method": "bdev_nvme_attach_controller" 00:39:40.513 } 00:39:40.513 EOF 00:39:40.513 )") 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:39:40.513 16:53:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:40.513 "params": { 00:39:40.513 "name": "Nvme1", 00:39:40.513 "trtype": "tcp", 00:39:40.513 "traddr": "10.0.0.2", 00:39:40.513 "adrfam": "ipv4", 00:39:40.513 "trsvcid": "4420", 00:39:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:40.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:40.513 "hdgst": false, 00:39:40.513 "ddgst": false 00:39:40.513 }, 00:39:40.513 "method": "bdev_nvme_attach_controller" 00:39:40.513 },{ 00:39:40.513 "params": { 00:39:40.513 "name": "Nvme2", 00:39:40.513 "trtype": "tcp", 00:39:40.513 "traddr": "10.0.0.2", 00:39:40.513 "adrfam": "ipv4", 00:39:40.513 "trsvcid": "4420", 00:39:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:40.513 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:40.513 "hdgst": false, 00:39:40.513 "ddgst": false 00:39:40.513 }, 00:39:40.513 "method": "bdev_nvme_attach_controller" 00:39:40.513 },{ 00:39:40.513 "params": { 00:39:40.513 "name": "Nvme3", 00:39:40.513 "trtype": "tcp", 00:39:40.513 "traddr": "10.0.0.2", 00:39:40.513 "adrfam": "ipv4", 00:39:40.513 "trsvcid": "4420", 00:39:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:39:40.513 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:39:40.513 "hdgst": false, 00:39:40.513 "ddgst": false 00:39:40.513 }, 00:39:40.513 "method": "bdev_nvme_attach_controller" 00:39:40.513 },{ 00:39:40.513 "params": { 00:39:40.513 "name": "Nvme4", 00:39:40.513 "trtype": "tcp", 00:39:40.513 "traddr": "10.0.0.2", 00:39:40.513 "adrfam": "ipv4", 00:39:40.513 "trsvcid": "4420", 00:39:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:39:40.513 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:39:40.513 "hdgst": false, 00:39:40.513 "ddgst": false 00:39:40.513 }, 00:39:40.513 "method": "bdev_nvme_attach_controller" 00:39:40.513 },{ 00:39:40.513 "params": { 00:39:40.513 "name": "Nvme5", 00:39:40.513 "trtype": "tcp", 00:39:40.513 "traddr": "10.0.0.2", 00:39:40.513 "adrfam": "ipv4", 00:39:40.513 "trsvcid": "4420", 00:39:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:39:40.513 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:39:40.513 "hdgst": false, 00:39:40.513 "ddgst": false 00:39:40.513 }, 00:39:40.513 
"method": "bdev_nvme_attach_controller" 00:39:40.513 },{ 00:39:40.513 "params": { 00:39:40.513 "name": "Nvme6", 00:39:40.513 "trtype": "tcp", 00:39:40.513 "traddr": "10.0.0.2", 00:39:40.513 "adrfam": "ipv4", 00:39:40.513 "trsvcid": "4420", 00:39:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:39:40.513 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:39:40.514 "hdgst": false, 00:39:40.514 "ddgst": false 00:39:40.514 }, 00:39:40.514 "method": "bdev_nvme_attach_controller" 00:39:40.514 },{ 00:39:40.514 "params": { 00:39:40.514 "name": "Nvme7", 00:39:40.514 "trtype": "tcp", 00:39:40.514 "traddr": "10.0.0.2", 00:39:40.514 "adrfam": "ipv4", 00:39:40.514 "trsvcid": "4420", 00:39:40.514 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:39:40.514 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:39:40.514 "hdgst": false, 00:39:40.514 "ddgst": false 00:39:40.514 }, 00:39:40.514 "method": "bdev_nvme_attach_controller" 00:39:40.514 },{ 00:39:40.514 "params": { 00:39:40.514 "name": "Nvme8", 00:39:40.514 "trtype": "tcp", 00:39:40.514 "traddr": "10.0.0.2", 00:39:40.514 "adrfam": "ipv4", 00:39:40.514 "trsvcid": "4420", 00:39:40.514 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:39:40.514 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:39:40.514 "hdgst": false, 00:39:40.514 "ddgst": false 00:39:40.514 }, 00:39:40.514 "method": "bdev_nvme_attach_controller" 00:39:40.514 },{ 00:39:40.514 "params": { 00:39:40.514 "name": "Nvme9", 00:39:40.514 "trtype": "tcp", 00:39:40.514 "traddr": "10.0.0.2", 00:39:40.514 "adrfam": "ipv4", 00:39:40.514 "trsvcid": "4420", 00:39:40.514 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:39:40.514 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:39:40.514 "hdgst": false, 00:39:40.514 "ddgst": false 00:39:40.514 }, 00:39:40.514 "method": "bdev_nvme_attach_controller" 00:39:40.514 },{ 00:39:40.514 "params": { 00:39:40.514 "name": "Nvme10", 00:39:40.514 "trtype": "tcp", 00:39:40.514 "traddr": "10.0.0.2", 00:39:40.514 "adrfam": "ipv4", 00:39:40.514 "trsvcid": "4420", 00:39:40.514 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:39:40.514 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:39:40.514 "hdgst": false, 00:39:40.514 "ddgst": false 00:39:40.514 }, 00:39:40.514 "method": "bdev_nvme_attach_controller" 00:39:40.514 }' 00:39:40.514 [2024-07-22 16:53:00.155277] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:39:40.514 [2024-07-22 16:53:00.155360] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:40.773 EAL: No free 2048 kB hugepages reported on node 1 00:39:40.773 [2024-07-22 16:53:00.229299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.773 [2024-07-22 16:53:00.316101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:42.671 16:53:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:42.671 16:53:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:39:42.671 16:53:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:42.671 16:53:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.671 16:53:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:42.671 16:53:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.671 16:53:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2884722 00:39:42.671 16:53:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:39:42.671 16:53:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:39:43.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2884722 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2884542 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:43.604 { 00:39:43.604 "params": { 00:39:43.604 "name": "Nvme$subsystem", 00:39:43.604 "trtype": "$TEST_TRANSPORT", 00:39:43.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.604 "adrfam": "ipv4", 00:39:43.604 "trsvcid": "$NVMF_PORT", 00:39:43.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.604 "hdgst": ${hdgst:-false}, 00:39:43.604 "ddgst": ${ddgst:-false} 00:39:43.604 }, 00:39:43.604 "method": "bdev_nvme_attach_controller" 00:39:43.604 } 00:39:43.604 EOF 00:39:43.604 )") 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:43.604 { 00:39:43.604 "params": { 00:39:43.604 "name": "Nvme$subsystem", 00:39:43.604 "trtype": "$TEST_TRANSPORT", 00:39:43.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.604 "adrfam": "ipv4", 00:39:43.604 "trsvcid": "$NVMF_PORT", 00:39:43.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.604 "hdgst": ${hdgst:-false}, 00:39:43.604 "ddgst": ${ddgst:-false} 00:39:43.604 }, 00:39:43.604 "method": "bdev_nvme_attach_controller" 00:39:43.604 } 00:39:43.604 EOF 00:39:43.604 )") 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:43.604 { 00:39:43.604 "params": { 00:39:43.604 "name": "Nvme$subsystem", 00:39:43.604 "trtype": "$TEST_TRANSPORT", 00:39:43.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.604 "adrfam": "ipv4", 00:39:43.604 "trsvcid": "$NVMF_PORT", 00:39:43.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.604 "hdgst": ${hdgst:-false}, 00:39:43.604 "ddgst": ${ddgst:-false} 00:39:43.604 }, 00:39:43.604 "method": "bdev_nvme_attach_controller" 00:39:43.604 } 00:39:43.604 EOF 00:39:43.604 )") 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:43.604 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:43.604 { 00:39:43.604 "params": { 00:39:43.605 "name": "Nvme$subsystem", 00:39:43.605 "trtype": "$TEST_TRANSPORT", 00:39:43.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.605 "adrfam": "ipv4", 00:39:43.605 "trsvcid": "$NVMF_PORT", 00:39:43.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.605 "hdgst": ${hdgst:-false}, 00:39:43.605 "ddgst": ${ddgst:-false} 00:39:43.605 }, 00:39:43.605 "method": "bdev_nvme_attach_controller" 00:39:43.605 } 00:39:43.605 EOF 00:39:43.605 )") 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:43.605 { 00:39:43.605 "params": { 00:39:43.605 "name": "Nvme$subsystem", 00:39:43.605 "trtype": "$TEST_TRANSPORT", 00:39:43.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.605 "adrfam": "ipv4", 00:39:43.605 "trsvcid": "$NVMF_PORT", 00:39:43.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.605 "hdgst": ${hdgst:-false}, 00:39:43.605 "ddgst": ${ddgst:-false} 00:39:43.605 }, 00:39:43.605 "method": "bdev_nvme_attach_controller" 00:39:43.605 } 00:39:43.605 EOF 00:39:43.605 )") 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:39:43.605 { 00:39:43.605 "params": { 00:39:43.605 "name": "Nvme$subsystem", 00:39:43.605 "trtype": "$TEST_TRANSPORT", 00:39:43.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.605 "adrfam": "ipv4", 00:39:43.605 "trsvcid": "$NVMF_PORT", 00:39:43.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.605 "hdgst": ${hdgst:-false}, 00:39:43.605 "ddgst": ${ddgst:-false} 00:39:43.605 }, 00:39:43.605 "method": "bdev_nvme_attach_controller" 00:39:43.605 } 00:39:43.605 EOF 00:39:43.605 )") 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:43.605 { 00:39:43.605 "params": { 00:39:43.605 "name": "Nvme$subsystem", 00:39:43.605 "trtype": "$TEST_TRANSPORT", 00:39:43.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.605 "adrfam": "ipv4", 00:39:43.605 "trsvcid": "$NVMF_PORT", 00:39:43.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.605 "hdgst": ${hdgst:-false}, 00:39:43.605 "ddgst": ${ddgst:-false} 00:39:43.605 }, 00:39:43.605 "method": "bdev_nvme_attach_controller" 00:39:43.605 } 00:39:43.605 EOF 00:39:43.605 )") 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:43.605 { 00:39:43.605 "params": { 00:39:43.605 "name": "Nvme$subsystem", 00:39:43.605 "trtype": "$TEST_TRANSPORT", 00:39:43.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.605 "adrfam": "ipv4", 00:39:43.605 "trsvcid": "$NVMF_PORT", 00:39:43.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.605 "hdgst": ${hdgst:-false}, 00:39:43.605 "ddgst": ${ddgst:-false} 00:39:43.605 }, 00:39:43.605 "method": "bdev_nvme_attach_controller" 00:39:43.605 } 00:39:43.605 EOF 00:39:43.605 )") 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:43.605 { 00:39:43.605 "params": { 00:39:43.605 "name": "Nvme$subsystem", 00:39:43.605 "trtype": "$TEST_TRANSPORT", 00:39:43.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.605 "adrfam": "ipv4", 00:39:43.605 "trsvcid": "$NVMF_PORT", 00:39:43.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.605 "hdgst": ${hdgst:-false}, 00:39:43.605 "ddgst": ${ddgst:-false} 00:39:43.605 }, 00:39:43.605 "method": "bdev_nvme_attach_controller" 00:39:43.605 } 00:39:43.605 EOF 00:39:43.605 )") 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:39:43.605 { 00:39:43.605 "params": { 00:39:43.605 "name": "Nvme$subsystem", 00:39:43.605 "trtype": "$TEST_TRANSPORT", 00:39:43.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.605 "adrfam": "ipv4", 00:39:43.605 "trsvcid": "$NVMF_PORT", 00:39:43.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.605 "hdgst": ${hdgst:-false}, 00:39:43.605 "ddgst": ${ddgst:-false} 00:39:43.605 }, 00:39:43.605 "method": "bdev_nvme_attach_controller" 00:39:43.605 } 00:39:43.605 EOF 00:39:43.605 )") 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:39:43.605 16:53:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:43.605 "params": { 00:39:43.605 "name": "Nvme1", 00:39:43.605 "trtype": "tcp", 00:39:43.605 "traddr": "10.0.0.2", 00:39:43.605 "adrfam": "ipv4", 00:39:43.605 "trsvcid": "4420", 00:39:43.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:43.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:43.605 "hdgst": false, 00:39:43.605 "ddgst": false 00:39:43.605 }, 00:39:43.605 "method": "bdev_nvme_attach_controller" 00:39:43.605 },{ 00:39:43.605 "params": { 00:39:43.605 "name": "Nvme2", 00:39:43.605 "trtype": "tcp", 00:39:43.605 "traddr": "10.0.0.2", 00:39:43.605 "adrfam": "ipv4", 00:39:43.605 "trsvcid": "4420", 00:39:43.605 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:43.605 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:43.605 "hdgst": false, 00:39:43.605 "ddgst": false 00:39:43.605 }, 00:39:43.605 "method": "bdev_nvme_attach_controller" 00:39:43.605 },{ 00:39:43.605 "params": { 00:39:43.605 "name": "Nvme3", 00:39:43.605 "trtype": "tcp", 00:39:43.605 "traddr": "10.0.0.2", 00:39:43.605 "adrfam": "ipv4", 00:39:43.605 "trsvcid": "4420", 00:39:43.605 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:39:43.605 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:39:43.605 "hdgst": false, 00:39:43.605 "ddgst": false 00:39:43.605 }, 00:39:43.605 "method": "bdev_nvme_attach_controller" 00:39:43.605 },{ 00:39:43.605 "params": { 00:39:43.605 "name": "Nvme4", 00:39:43.605 "trtype": "tcp", 00:39:43.605 "traddr": "10.0.0.2", 00:39:43.605 "adrfam": "ipv4", 00:39:43.605 "trsvcid": "4420", 00:39:43.606 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:39:43.606 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:39:43.606 "hdgst": false, 00:39:43.606 "ddgst": false 00:39:43.606 }, 00:39:43.606 "method": "bdev_nvme_attach_controller" 00:39:43.606 },{ 00:39:43.606 "params": { 00:39:43.606 "name": "Nvme5", 00:39:43.606 "trtype": "tcp", 00:39:43.606 "traddr": "10.0.0.2", 00:39:43.606 "adrfam": "ipv4", 00:39:43.606 "trsvcid": "4420", 00:39:43.606 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:39:43.606 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:39:43.606 "hdgst": false, 00:39:43.606 "ddgst": false 00:39:43.606 }, 00:39:43.606 "method": "bdev_nvme_attach_controller" 00:39:43.606 },{ 00:39:43.606 "params": { 00:39:43.606 "name": "Nvme6", 00:39:43.606 "trtype": "tcp", 00:39:43.606 "traddr": "10.0.0.2", 00:39:43.606 "adrfam": "ipv4", 00:39:43.606 "trsvcid": "4420", 00:39:43.606 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:39:43.606 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:39:43.606 "hdgst": false, 00:39:43.606 "ddgst": false 00:39:43.606 }, 00:39:43.606 "method": "bdev_nvme_attach_controller" 00:39:43.606 },{ 00:39:43.606 
"params": { 00:39:43.606 "name": "Nvme7", 00:39:43.606 "trtype": "tcp", 00:39:43.606 "traddr": "10.0.0.2", 00:39:43.606 "adrfam": "ipv4", 00:39:43.606 "trsvcid": "4420", 00:39:43.606 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:39:43.606 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:39:43.606 "hdgst": false, 00:39:43.606 "ddgst": false 00:39:43.606 }, 00:39:43.606 "method": "bdev_nvme_attach_controller" 00:39:43.606 },{ 00:39:43.606 "params": { 00:39:43.606 "name": "Nvme8", 00:39:43.606 "trtype": "tcp", 00:39:43.606 "traddr": "10.0.0.2", 00:39:43.606 "adrfam": "ipv4", 00:39:43.606 "trsvcid": "4420", 00:39:43.606 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:39:43.606 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:39:43.606 "hdgst": false, 00:39:43.606 "ddgst": false 00:39:43.606 }, 00:39:43.606 "method": "bdev_nvme_attach_controller" 00:39:43.606 },{ 00:39:43.606 "params": { 00:39:43.606 "name": "Nvme9", 00:39:43.606 "trtype": "tcp", 00:39:43.606 "traddr": "10.0.0.2", 00:39:43.606 "adrfam": "ipv4", 00:39:43.606 "trsvcid": "4420", 00:39:43.606 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:39:43.606 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:39:43.606 "hdgst": false, 00:39:43.606 "ddgst": false 00:39:43.606 }, 00:39:43.606 "method": "bdev_nvme_attach_controller" 00:39:43.606 },{ 00:39:43.606 "params": { 00:39:43.606 "name": "Nvme10", 00:39:43.606 "trtype": "tcp", 00:39:43.606 "traddr": "10.0.0.2", 00:39:43.606 "adrfam": "ipv4", 00:39:43.606 "trsvcid": "4420", 00:39:43.606 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:39:43.606 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:39:43.606 "hdgst": false, 00:39:43.606 "ddgst": false 00:39:43.606 }, 00:39:43.606 "method": "bdev_nvme_attach_controller" 00:39:43.606 }' 00:39:43.606 [2024-07-22 16:53:03.170105] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:43.606 [2024-07-22 16:53:03.170191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885025 ] 00:39:43.606 EAL: No free 2048 kB hugepages reported on node 1 00:39:43.606 [2024-07-22 16:53:03.244519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:43.864 [2024-07-22 16:53:03.335969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.261 Running I/O for 1 seconds... 
00:39:46.633 
00:39:46.634 Latency(us) 
00:39:46.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:39:46.634 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:39:46.634 Verification LBA range: start 0x0 length 0x400 
00:39:46.634 Nvme1n1 : 1.12 229.31 14.33 0.00 0.00 276100.93 18641.35 256318.58 
00:39:46.634 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:39:46.634 Verification LBA range: start 0x0 length 0x400 
00:39:46.634 Nvme2n1 : 1.12 228.46 14.28 0.00 0.00 272749.99 19515.16 254765.13 
00:39:46.634 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:39:46.634 Verification LBA range: start 0x0 length 0x400 
00:39:46.634 Nvme3n1 : 1.11 234.29 14.64 0.00 0.00 260029.48 5534.15 245444.46 
00:39:46.634 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:39:46.634 Verification LBA range: start 0x0 length 0x400 
00:39:46.634 Nvme4n1 : 1.11 235.22 14.70 0.00 0.00 254155.52 9466.31 260978.92 
00:39:46.634 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:39:46.634 Verification LBA range: start 0x0 length 0x400 
00:39:46.634 Nvme5n1 : 1.14 224.05 14.00 0.00 0.00 264535.80 20388.98 268746.15 
00:39:46.634 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:39:46.634 Verification LBA range: start 0x0 length 0x400 
00:39:46.634 Nvme6n1 : 1.18 216.14 13.51 0.00 0.00 261467.40 20971.52 262532.36 
00:39:46.634 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:39:46.634 Verification LBA range: start 0x0 length 0x400 
00:39:46.634 Nvme7n1 : 1.13 226.64 14.16 0.00 0.00 252076.94 20194.80 267192.70 
00:39:46.634 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:39:46.634 Verification LBA range: start 0x0 length 0x400 
00:39:46.634 Nvme8n1 : 1.14 230.09 14.38 0.00 0.00 243331.58 1905.40 262532.36 
00:39:46.634 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:39:46.634 Verification LBA range: start 0x0 length 0x400 
00:39:46.634 Nvme9n1 : 1.19 215.05 13.44 0.00 0.00 252098.75 18738.44 296708.17 
00:39:46.634 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:39:46.634 Verification LBA range: start 0x0 length 0x400 
00:39:46.634 Nvme10n1 : 1.19 268.40 16.77 0.00 0.00 203349.11 9320.68 276513.37 
00:39:46.634 =================================================================================================================== 
00:39:46.634 Total : 2307.64 144.23 0.00 0.00 252746.14 1905.40 296708.17 
00:39:46.634 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:39:46.634 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:39:46.634 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:46.634 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:46.634 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:39:46.634 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:46.634 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:39:46.634 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:46.634 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:39:46.634 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:46.634 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:46.892 rmmod nvme_tcp 00:39:46.892 rmmod nvme_fabrics 00:39:46.892 rmmod nvme_keyring 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2884542 ']' 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2884542 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 2884542 ']' 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 2884542 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2884542 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2884542' 00:39:46.893 killing process with pid 2884542 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 2884542 00:39:46.893 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 2884542 00:39:47.459 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:47.459 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:47.459 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:47.459 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:47.459 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:47.459 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:47.459 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:47.459 16:53:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:49.360 00:39:49.360 real 0m12.251s 00:39:49.360 user 0m34.565s 00:39:49.360 sys 0m3.604s 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:49.360 16:53:08 
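nvmftestfini above unwinds nvmftestinit step by step: modprobe -v -r nvme-tcp unloads the module stack (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are its verbose output), killprocess sends kill and then wait to the target pid, and nvmf_tcp_fini tears down the namespace and flushes the initiator address. A hand-run equivalent, assuming _remove_spdk_ns amounts to an ip netns delete (its body is not shown in this trace):

modprobe -r nvme-tcp              # also drops nvme_fabrics / nvme_keyring
kill 2884542 && wait 2884542      # killprocess on this run's nvmfpid;
                                  # wait only works where nvmf_tgt is a child,
                                  # as it is for the test shell
ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1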
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:49.360 ************************************ 00:39:49.360 END TEST nvmf_shutdown_tc1 00:39:49.360 ************************************ 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:39:49.360 ************************************ 00:39:49.360 START TEST nvmf_shutdown_tc2 00:39:49.360 ************************************ 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@296 -- # e810=() 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:39:49.360 Found 0000:82:00.0 (0x8086 - 0x159b) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:49.360 
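tc2 now repeats the same initialization, and this stretch shows the step that precedes discovery: candidate ports are bucketed by PCI vendor:device id into e810/x722/mlx arrays, and only the family the job targets is probed. In isolation, with pci_bus_cache populated by a bus scan earlier in common.sh (outside this excerpt):

# id -> "pci-addr ..." cache; keep only the e810 bucket
declare -A pci_bus_cache
intel=0x8086
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
x722=(${pci_bus_cache["$intel:0x37d2"]})
pci_devs=("${e810[@]}")   # resolves to 0000:82:00.0 0000:82:00.1 on this rig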
16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:39:49.360 Found 0000:82:00.1 (0x8086 - 0x159b) 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:49.360 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:39:49.361 Found net devices under 0000:82:00.0: cvl_0_0 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:39:49.361 Found net devices under 0000:82:00.1: cvl_0_1 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:49.361 16:53:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:49.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:49.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:39:49.619 00:39:49.619 --- 10.0.0.2 ping statistics --- 00:39:49.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.619 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:49.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:49.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:39:49.619 00:39:49.619 --- 10.0.0.1 ping statistics --- 00:39:49.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.619 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:49.619 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2885804 00:39:49.620 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:39:49.620 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2885804 00:39:49.620 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2885804 ']' 00:39:49.620 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:49.620 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:49.620 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:49.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:49.620 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:49.620 16:53:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:49.620 [2024-07-22 16:53:09.173980] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:39:49.620 [2024-07-22 16:53:09.174075] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:49.620 EAL: No free 2048 kB hugepages reported on node 1 00:39:49.620 [2024-07-22 16:53:09.257443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:49.877 [2024-07-22 16:53:09.348885] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:49.877 [2024-07-22 16:53:09.348944] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:49.877 [2024-07-22 16:53:09.348960] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:49.877 [2024-07-22 16:53:09.348983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:49.877 [2024-07-22 16:53:09.348995] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:49.877 [2024-07-22 16:53:09.349109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:49.877 [2024-07-22 16:53:09.349160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:49.877 [2024-07-22 16:53:09.349208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:39:49.877 [2024-07-22 16:53:09.349211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:50.808 [2024-07-22 16:53:10.123711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.808 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:50.808 Malloc1 00:39:50.808 [2024-07-22 16:53:10.198647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:50.808 Malloc2 00:39:50.808 Malloc3 00:39:50.808 Malloc4 00:39:50.808 Malloc5 00:39:50.808 Malloc6 00:39:51.067 Malloc7 00:39:51.067 Malloc8 00:39:51.067 Malloc9 00:39:51.067 Malloc10 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2886093 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2886093 /var/tmp/bdevperf.sock 00:39:51.067 
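Each "# cat" in the loop above appends one subsystem's worth of RPC lines to rpcs.txt, which the "rpc_cmd" that follows replays against the target as a single batch; that batch is what produces Malloc1 through Malloc10 and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice. A hedged reconstruction of one iteration's batch (the RPC names are real SPDK RPCs; the bdev size, block size, and serial format are assumptions, not read from this log):

    # One iteration of the shutdown.sh subsystem loop, as batched RPC lines (i = 1):
    bdev_malloc_create -b Malloc1 128 512                 # 128 MB malloc bdev, 512 B blocks (sizes assumed)
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420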
16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2886093 ']' 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:39:51.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:51.067 { 00:39:51.067 "params": { 00:39:51.067 "name": "Nvme$subsystem", 00:39:51.067 "trtype": "$TEST_TRANSPORT", 00:39:51.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.067 "adrfam": "ipv4", 00:39:51.067 "trsvcid": "$NVMF_PORT", 00:39:51.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.067 "hdgst": ${hdgst:-false}, 00:39:51.067 "ddgst": ${ddgst:-false} 00:39:51.067 }, 00:39:51.067 "method": "bdev_nvme_attach_controller" 00:39:51.067 } 00:39:51.067 EOF 00:39:51.067 )") 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:51.067 { 00:39:51.067 "params": { 00:39:51.067 "name": "Nvme$subsystem", 00:39:51.067 "trtype": "$TEST_TRANSPORT", 00:39:51.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.067 "adrfam": "ipv4", 00:39:51.067 "trsvcid": "$NVMF_PORT", 00:39:51.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.067 "hdgst": ${hdgst:-false}, 00:39:51.067 "ddgst": ${ddgst:-false} 00:39:51.067 }, 00:39:51.067 "method": "bdev_nvme_attach_controller" 00:39:51.067 } 00:39:51.067 EOF 00:39:51.067 )") 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:51.067 { 00:39:51.067 "params": { 00:39:51.067 "name": 
"Nvme$subsystem", 00:39:51.067 "trtype": "$TEST_TRANSPORT", 00:39:51.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.067 "adrfam": "ipv4", 00:39:51.067 "trsvcid": "$NVMF_PORT", 00:39:51.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.067 "hdgst": ${hdgst:-false}, 00:39:51.067 "ddgst": ${ddgst:-false} 00:39:51.067 }, 00:39:51.067 "method": "bdev_nvme_attach_controller" 00:39:51.067 } 00:39:51.067 EOF 00:39:51.067 )") 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:51.067 { 00:39:51.067 "params": { 00:39:51.067 "name": "Nvme$subsystem", 00:39:51.067 "trtype": "$TEST_TRANSPORT", 00:39:51.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.067 "adrfam": "ipv4", 00:39:51.067 "trsvcid": "$NVMF_PORT", 00:39:51.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.067 "hdgst": ${hdgst:-false}, 00:39:51.067 "ddgst": ${ddgst:-false} 00:39:51.067 }, 00:39:51.067 "method": "bdev_nvme_attach_controller" 00:39:51.067 } 00:39:51.067 EOF 00:39:51.067 )") 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:39:51.067 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:51.068 { 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme$subsystem", 00:39:51.068 "trtype": "$TEST_TRANSPORT", 00:39:51.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "$NVMF_PORT", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.068 "hdgst": ${hdgst:-false}, 00:39:51.068 "ddgst": ${ddgst:-false} 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 } 00:39:51.068 EOF 00:39:51.068 )") 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:51.068 { 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme$subsystem", 00:39:51.068 "trtype": "$TEST_TRANSPORT", 00:39:51.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "$NVMF_PORT", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.068 "hdgst": ${hdgst:-false}, 00:39:51.068 "ddgst": ${ddgst:-false} 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 } 00:39:51.068 EOF 00:39:51.068 )") 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:51.068 { 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme$subsystem", 00:39:51.068 
"trtype": "$TEST_TRANSPORT", 00:39:51.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "$NVMF_PORT", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.068 "hdgst": ${hdgst:-false}, 00:39:51.068 "ddgst": ${ddgst:-false} 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 } 00:39:51.068 EOF 00:39:51.068 )") 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:51.068 { 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme$subsystem", 00:39:51.068 "trtype": "$TEST_TRANSPORT", 00:39:51.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "$NVMF_PORT", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.068 "hdgst": ${hdgst:-false}, 00:39:51.068 "ddgst": ${ddgst:-false} 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 } 00:39:51.068 EOF 00:39:51.068 )") 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:51.068 { 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme$subsystem", 00:39:51.068 "trtype": "$TEST_TRANSPORT", 00:39:51.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "$NVMF_PORT", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.068 "hdgst": ${hdgst:-false}, 00:39:51.068 "ddgst": ${ddgst:-false} 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 } 00:39:51.068 EOF 00:39:51.068 )") 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:51.068 { 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme$subsystem", 00:39:51.068 "trtype": "$TEST_TRANSPORT", 00:39:51.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "$NVMF_PORT", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.068 "hdgst": ${hdgst:-false}, 00:39:51.068 "ddgst": ${ddgst:-false} 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 } 00:39:51.068 EOF 00:39:51.068 )") 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:39:51.068 16:53:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme1", 00:39:51.068 "trtype": "tcp", 00:39:51.068 "traddr": "10.0.0.2", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "4420", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:51.068 "hdgst": false, 00:39:51.068 "ddgst": false 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 },{ 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme2", 00:39:51.068 "trtype": "tcp", 00:39:51.068 "traddr": "10.0.0.2", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "4420", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:51.068 "hdgst": false, 00:39:51.068 "ddgst": false 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 },{ 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme3", 00:39:51.068 "trtype": "tcp", 00:39:51.068 "traddr": "10.0.0.2", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "4420", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:39:51.068 "hdgst": false, 00:39:51.068 "ddgst": false 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 },{ 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme4", 00:39:51.068 "trtype": "tcp", 00:39:51.068 "traddr": "10.0.0.2", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "4420", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:39:51.068 "hdgst": false, 00:39:51.068 "ddgst": false 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 },{ 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme5", 00:39:51.068 "trtype": "tcp", 00:39:51.068 "traddr": "10.0.0.2", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "4420", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:39:51.068 "hdgst": false, 00:39:51.068 "ddgst": false 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 },{ 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme6", 00:39:51.068 "trtype": "tcp", 00:39:51.068 "traddr": "10.0.0.2", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "4420", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:39:51.068 "hdgst": false, 00:39:51.068 "ddgst": false 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 },{ 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme7", 00:39:51.068 "trtype": "tcp", 00:39:51.068 "traddr": "10.0.0.2", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "4420", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:39:51.068 "hdgst": false, 00:39:51.068 "ddgst": false 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 },{ 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme8", 00:39:51.068 "trtype": "tcp", 00:39:51.068 "traddr": "10.0.0.2", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "4420", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:39:51.068 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:39:51.068 "hdgst": false, 
00:39:51.068 "ddgst": false 00:39:51.068 }, 00:39:51.068 "method": "bdev_nvme_attach_controller" 00:39:51.068 },{ 00:39:51.068 "params": { 00:39:51.068 "name": "Nvme9", 00:39:51.068 "trtype": "tcp", 00:39:51.068 "traddr": "10.0.0.2", 00:39:51.068 "adrfam": "ipv4", 00:39:51.068 "trsvcid": "4420", 00:39:51.068 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:39:51.069 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:39:51.069 "hdgst": false, 00:39:51.069 "ddgst": false 00:39:51.069 }, 00:39:51.069 "method": "bdev_nvme_attach_controller" 00:39:51.069 },{ 00:39:51.069 "params": { 00:39:51.069 "name": "Nvme10", 00:39:51.069 "trtype": "tcp", 00:39:51.069 "traddr": "10.0.0.2", 00:39:51.069 "adrfam": "ipv4", 00:39:51.069 "trsvcid": "4420", 00:39:51.069 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:39:51.069 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:39:51.069 "hdgst": false, 00:39:51.069 "ddgst": false 00:39:51.069 }, 00:39:51.069 "method": "bdev_nvme_attach_controller" 00:39:51.069 }' 00:39:51.069 [2024-07-22 16:53:10.715133] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:51.069 [2024-07-22 16:53:10.715211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886093 ] 00:39:51.327 EAL: No free 2048 kB hugepages reported on node 1 00:39:51.327 [2024-07-22 16:53:10.788680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.327 [2024-07-22 16:53:10.876143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.318 Running I/O for 10 seconds... 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:39:53.318 16:53:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:39:53.602 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:39:53.602 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:39:53.602 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:39:53.602 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:39:53.602 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.602 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:53.602 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.602 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:39:53.602 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:39:53.602 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:39:53.859 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:39:53.859 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:39:53.859 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:39:53.859 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:39:53.859 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.859 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2886093 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 2886093 ']' 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 2886093 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 
00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2886093 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2886093' 00:39:54.117 killing process with pid 2886093 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 2886093 00:39:54.117 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 2886093 00:39:54.117 Received shutdown signal, test time was about 0.950117 seconds 00:39:54.117 00:39:54.117 Latency(us) 00:39:54.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:54.117 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:54.117 Verification LBA range: start 0x0 length 0x400 00:39:54.117 Nvme1n1 : 0.89 215.34 13.46 0.00 0.00 293632.63 31263.10 257872.02 00:39:54.117 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:54.117 Verification LBA range: start 0x0 length 0x400 00:39:54.117 Nvme2n1 : 0.91 210.24 13.14 0.00 0.00 294661.44 19029.71 256318.58 00:39:54.117 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:54.117 Verification LBA range: start 0x0 length 0x400 00:39:54.117 Nvme3n1 : 0.95 269.68 16.85 0.00 0.00 225395.11 22039.51 250104.79 00:39:54.117 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:54.117 Verification LBA range: start 0x0 length 0x400 00:39:54.117 Nvme4n1 : 0.94 272.43 17.03 0.00 0.00 217309.87 17670.45 250104.79 00:39:54.117 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:54.117 Verification LBA range: start 0x0 length 0x400 00:39:54.117 Nvme5n1 : 0.93 207.44 12.96 0.00 0.00 280476.63 21359.88 260978.92 00:39:54.117 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:54.117 Verification LBA range: start 0x0 length 0x400 00:39:54.117 Nvme6n1 : 0.93 206.11 12.88 0.00 0.00 276425.64 23884.23 282727.16 00:39:54.117 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:54.117 Verification LBA range: start 0x0 length 0x400 00:39:54.118 Nvme7n1 : 0.94 271.24 16.95 0.00 0.00 205653.71 16699.54 262532.36 00:39:54.118 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:54.118 Verification LBA range: start 0x0 length 0x400 00:39:54.118 Nvme8n1 : 0.91 218.71 13.67 0.00 0.00 246003.22 3713.71 254765.13 00:39:54.118 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:54.118 Verification LBA range: start 0x0 length 0x400 00:39:54.118 Nvme9n1 : 0.94 205.07 12.82 0.00 0.00 260269.45 21845.33 292047.83 00:39:54.118 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:54.118 Verification LBA range: start 0x0 length 0x400 00:39:54.118 Nvme10n1 : 0.92 208.66 13.04 0.00 0.00 249100.52 22427.88 265639.25 00:39:54.118 =================================================================================================================== 00:39:54.118 Total : 2284.91 
142.81 0.00 0.00 251352.78 3713.71 292047.83 00:39:54.375 16:53:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2885804 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:55.307 rmmod nvme_tcp 00:39:55.307 rmmod nvme_fabrics 00:39:55.307 rmmod nvme_keyring 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2885804 ']' 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2885804 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 2885804 ']' 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 2885804 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2885804 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2885804' 00:39:55.307 killing process with pid 2885804 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 2885804 00:39:55.307 16:53:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 2885804 00:39:55.873 16:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- 
# '[' '' == iso ']' 00:39:55.873 16:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:55.873 16:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:55.873 16:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:55.873 16:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:55.873 16:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:55.873 16:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:55.873 16:53:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:58.403 00:39:58.403 real 0m8.544s 00:39:58.403 user 0m27.126s 00:39:58.403 sys 0m1.588s 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:58.403 ************************************ 00:39:58.403 END TEST nvmf_shutdown_tc2 00:39:58.403 ************************************ 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:39:58.403 ************************************ 00:39:58.403 START TEST nvmf_shutdown_tc3 00:39:58.403 ************************************ 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:58.403 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:39:58.404 Found 0000:82:00.0 (0x8086 - 0x159b) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:39:58.404 Found 0000:82:00.1 (0x8086 - 0x159b) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:39:58.404 Found net devices under 0000:82:00.0: cvl_0_0 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:58.404 
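tc3 now repeats the interface bring-up that tc2 performed earlier. The nvmf_tcp_init step splits the two E810 ports across a network namespace so target and initiator traffic actually traverse the link: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule admits the NVMe/TCP port. Collected from the trace, the sequence is:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator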
16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:39:58.404 Found net devices under 0000:82:00.1: cvl_0_1 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:58.404 16:53:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:58.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:58.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:39:58.404 00:39:58.404 --- 10.0.0.2 ping statistics --- 00:39:58.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:58.404 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:58.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:58.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:39:58.404 00:39:58.404 --- 10.0.0.1 ping statistics --- 00:39:58.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:58.404 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:58.404 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:39:58.405 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2887019 00:39:58.405 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:39:58.405 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2887019 00:39:58.405 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 2887019 ']' 00:39:58.405 16:53:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:58.405 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:58.405 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:58.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:58.405 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:58.405 16:53:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:39:58.405 [2024-07-22 16:53:17.760021] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:58.405 [2024-07-22 16:53:17.760096] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:58.405 EAL: No free 2048 kB hugepages reported on node 1 00:39:58.405 [2024-07-22 16:53:17.832787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:58.405 [2024-07-22 16:53:17.918874] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:58.405 [2024-07-22 16:53:17.918925] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:58.405 [2024-07-22 16:53:17.918961] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:58.405 [2024-07-22 16:53:17.918981] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:58.405 [2024-07-22 16:53:17.918993] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
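For anyone reconstructing the rig from this trace: nvmf_tcp_init (common.sh@229-268 above) moves one E810 port, cvl_0_0, into a private network namespace to play the target, leaves its peer cvl_0_1 in the root namespace as the initiator, opens TCP/4420, and proves reachability with one ping in each direction before any NVMe/TCP traffic flows. A minimal standalone sketch of that wiring, using the interface names and addresses from the log (error handling and the second-target branch omitted):

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1       # start both ports clean
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                       # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1                   # namespaced target -> root ns

Everything the target does from here on runs under ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt command line in the surrounding records carries that prefix.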
00:39:58.405 [2024-07-22 16:53:17.919084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:58.405 [2024-07-22 16:53:17.919115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:58.405 [2024-07-22 16:53:17.919171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:39:58.405 [2024-07-22 16:53:17.919174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:58.405 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:58.405 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:39:58.405 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:58.405 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:58.405 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:39:58.663 [2024-07-22 16:53:18.080858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:39:58.663 16:53:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:58.663 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:39:58.663 Malloc1 00:39:58.663 [2024-07-22 16:53:18.170088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:58.663 Malloc2 00:39:58.663 Malloc3 00:39:58.663 Malloc4 00:39:58.921 Malloc5 00:39:58.921 Malloc6 00:39:58.921 Malloc7 00:39:58.921 Malloc8 00:39:58.921 Malloc9 00:39:59.179 Malloc10 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2887195 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2887195 /var/tmp/bdevperf.sock 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 2887195 ']' 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:59.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
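The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock..." message above comes from a waitforlisten-style helper: the harness will not issue RPCs until the freshly forked app answers on its RPC socket. The helper's internals are not shown in this trace, so the following is a hedged sketch of the pattern, with the retry budget and the rpc.py location as illustrative assumptions:

# Poll an SPDK app's UNIX-domain RPC socket until it answers, or give up.
# rpc_get_methods is a cheap RPC that succeeds as soon as the server is up.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1           # app died while starting
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                             # never came up
}

waitforlisten 2887195 /var/tmp/bdevperf.sock             # as invoked for bdevperf above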
00:39:59.179 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:59.180 { 00:39:59.180 "params": { 00:39:59.180 "name": "Nvme$subsystem", 00:39:59.180 "trtype": "$TEST_TRANSPORT", 00:39:59.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:59.180 "adrfam": "ipv4", 00:39:59.180 "trsvcid": "$NVMF_PORT", 00:39:59.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:59.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:59.180 "hdgst": ${hdgst:-false}, 00:39:59.180 "ddgst": ${ddgst:-false} 00:39:59.180 }, 00:39:59.180 "method": "bdev_nvme_attach_controller" 00:39:59.180 } 00:39:59.180 EOF 00:39:59.180 )") 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:59.180 { 00:39:59.180 "params": { 00:39:59.180 "name": "Nvme$subsystem", 00:39:59.180 "trtype": "$TEST_TRANSPORT", 00:39:59.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:59.180 "adrfam": "ipv4", 00:39:59.180 "trsvcid": "$NVMF_PORT", 00:39:59.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:59.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:59.180 "hdgst": ${hdgst:-false}, 00:39:59.180 "ddgst": ${ddgst:-false} 00:39:59.180 }, 00:39:59.180 "method": "bdev_nvme_attach_controller" 00:39:59.180 } 00:39:59.180 EOF 00:39:59.180 )") 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:59.180 { 00:39:59.180 "params": { 00:39:59.180 "name": "Nvme$subsystem", 00:39:59.180 "trtype": "$TEST_TRANSPORT", 00:39:59.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:59.180 "adrfam": "ipv4", 00:39:59.180 "trsvcid": "$NVMF_PORT", 00:39:59.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:59.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:59.180 "hdgst": ${hdgst:-false}, 00:39:59.180 "ddgst": ${ddgst:-false} 00:39:59.180 }, 00:39:59.180 "method": "bdev_nvme_attach_controller" 00:39:59.180 } 00:39:59.180 EOF 00:39:59.180 )") 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:59.180 { 00:39:59.180 "params": { 00:39:59.180 "name": "Nvme$subsystem", 00:39:59.180 "trtype": "$TEST_TRANSPORT", 00:39:59.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:59.180 "adrfam": "ipv4", 00:39:59.180 "trsvcid": "$NVMF_PORT", 
00:39:59.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:59.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:59.180 "hdgst": ${hdgst:-false}, 00:39:59.180 "ddgst": ${ddgst:-false} 00:39:59.180 }, 00:39:59.180 "method": "bdev_nvme_attach_controller" 00:39:59.180 } 00:39:59.180 EOF 00:39:59.180 )") 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:59.180 { 00:39:59.180 "params": { 00:39:59.180 "name": "Nvme$subsystem", 00:39:59.180 "trtype": "$TEST_TRANSPORT", 00:39:59.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:59.180 "adrfam": "ipv4", 00:39:59.180 "trsvcid": "$NVMF_PORT", 00:39:59.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:59.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:59.180 "hdgst": ${hdgst:-false}, 00:39:59.180 "ddgst": ${ddgst:-false} 00:39:59.180 }, 00:39:59.180 "method": "bdev_nvme_attach_controller" 00:39:59.180 } 00:39:59.180 EOF 00:39:59.180 )") 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:59.180 { 00:39:59.180 "params": { 00:39:59.180 "name": "Nvme$subsystem", 00:39:59.180 "trtype": "$TEST_TRANSPORT", 00:39:59.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:59.180 "adrfam": "ipv4", 00:39:59.180 "trsvcid": "$NVMF_PORT", 00:39:59.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:59.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:59.180 "hdgst": ${hdgst:-false}, 00:39:59.180 "ddgst": ${ddgst:-false} 00:39:59.180 }, 00:39:59.180 "method": "bdev_nvme_attach_controller" 00:39:59.180 } 00:39:59.180 EOF 00:39:59.180 )") 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:59.180 { 00:39:59.180 "params": { 00:39:59.180 "name": "Nvme$subsystem", 00:39:59.180 "trtype": "$TEST_TRANSPORT", 00:39:59.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:59.180 "adrfam": "ipv4", 00:39:59.180 "trsvcid": "$NVMF_PORT", 00:39:59.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:59.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:59.180 "hdgst": ${hdgst:-false}, 00:39:59.180 "ddgst": ${ddgst:-false} 00:39:59.180 }, 00:39:59.180 "method": "bdev_nvme_attach_controller" 00:39:59.180 } 00:39:59.180 EOF 00:39:59.180 )") 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:59.180 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:59.180 { 00:39:59.180 "params": { 00:39:59.180 "name": "Nvme$subsystem", 00:39:59.180 "trtype": "$TEST_TRANSPORT", 00:39:59.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:59.180 "adrfam": "ipv4", 00:39:59.180 "trsvcid": "$NVMF_PORT", 00:39:59.180 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:39:59.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:59.180 "hdgst": ${hdgst:-false}, 00:39:59.180 "ddgst": ${ddgst:-false} 00:39:59.180 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 } 00:39:59.181 EOF 00:39:59.181 )") 00:39:59.181 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:39:59.181 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:59.181 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:59.181 { 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme$subsystem", 00:39:59.181 "trtype": "$TEST_TRANSPORT", 00:39:59.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "$NVMF_PORT", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:59.181 "hdgst": ${hdgst:-false}, 00:39:59.181 "ddgst": ${ddgst:-false} 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 } 00:39:59.181 EOF 00:39:59.181 )") 00:39:59.181 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:39:59.181 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:59.181 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:59.181 { 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme$subsystem", 00:39:59.181 "trtype": "$TEST_TRANSPORT", 00:39:59.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "$NVMF_PORT", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:59.181 "hdgst": ${hdgst:-false}, 00:39:59.181 "ddgst": ${ddgst:-false} 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 } 00:39:59.181 EOF 00:39:59.181 )") 00:39:59.181 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:39:59.181 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:39:59.181 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:39:59.181 16:53:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme1", 00:39:59.181 "trtype": "tcp", 00:39:59.181 "traddr": "10.0.0.2", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "4420", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:59.181 "hdgst": false, 00:39:59.181 "ddgst": false 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 },{ 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme2", 00:39:59.181 "trtype": "tcp", 00:39:59.181 "traddr": "10.0.0.2", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "4420", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:59.181 "hdgst": false, 00:39:59.181 "ddgst": false 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 },{ 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme3", 00:39:59.181 "trtype": "tcp", 00:39:59.181 "traddr": "10.0.0.2", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "4420", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:39:59.181 "hdgst": false, 00:39:59.181 "ddgst": false 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 },{ 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme4", 00:39:59.181 "trtype": "tcp", 00:39:59.181 "traddr": "10.0.0.2", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "4420", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:39:59.181 "hdgst": false, 00:39:59.181 "ddgst": false 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 },{ 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme5", 00:39:59.181 "trtype": "tcp", 00:39:59.181 "traddr": "10.0.0.2", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "4420", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:39:59.181 "hdgst": false, 00:39:59.181 "ddgst": false 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 },{ 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme6", 00:39:59.181 "trtype": "tcp", 00:39:59.181 "traddr": "10.0.0.2", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "4420", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:39:59.181 "hdgst": false, 00:39:59.181 "ddgst": false 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 },{ 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme7", 00:39:59.181 "trtype": "tcp", 00:39:59.181 "traddr": "10.0.0.2", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "4420", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:39:59.181 "hdgst": false, 00:39:59.181 "ddgst": false 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 },{ 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme8", 00:39:59.181 "trtype": "tcp", 00:39:59.181 "traddr": "10.0.0.2", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "4420", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:39:59.181 "hdgst": false, 
00:39:59.181 "ddgst": false 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 },{ 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme9", 00:39:59.181 "trtype": "tcp", 00:39:59.181 "traddr": "10.0.0.2", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "4420", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:39:59.181 "hdgst": false, 00:39:59.181 "ddgst": false 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 },{ 00:39:59.181 "params": { 00:39:59.181 "name": "Nvme10", 00:39:59.181 "trtype": "tcp", 00:39:59.181 "traddr": "10.0.0.2", 00:39:59.181 "adrfam": "ipv4", 00:39:59.181 "trsvcid": "4420", 00:39:59.181 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:39:59.181 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:39:59.181 "hdgst": false, 00:39:59.181 "ddgst": false 00:39:59.181 }, 00:39:59.181 "method": "bdev_nvme_attach_controller" 00:39:59.181 }' 00:39:59.181 [2024-07-22 16:53:18.684844] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:59.181 [2024-07-22 16:53:18.684933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887195 ] 00:39:59.181 EAL: No free 2048 kB hugepages reported on node 1 00:39:59.181 [2024-07-22 16:53:18.758333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.439 [2024-07-22 16:53:18.845845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:00.811 Running I/O for 10 seconds... 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:40:01.377 16:53:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2887019 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 2887019 ']' 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 2887019 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2887019 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2887019' 00:40:01.650 killing process with pid 2887019 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 2887019 00:40:01.650 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 2887019 00:40:01.651 [2024-07-22 
16:53:21.147195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2323a90 is same with the state(5) to be set 00:40:01.651 [2024-07-22 16:53:21.151253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2323f30 is same with the state(5) to be set 00:40:01.652 [2024-07-22 16:53:21.153743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23243d0 is same with the state(5) to be set 00:40:01.653 [... the identical tcp.c:1598 recv-state *ERROR* line repeats dozens of times for each of tqpair=0x2323a90, 0x2323f30, 0x23243d0 and 0x2324890 between 16:53:21.147 and 16:53:21.156 while the target tears down its qpairs; the duplicate records are condensed here ...] 00:40:01.653 [2024-07-22 16:53:21.156106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156322] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 
16:53:21.156389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.653 [2024-07-22 16:53:21.156625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same 
with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156686] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.156821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324890 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158626] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158679] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158783] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158912] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the 
state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.158996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159141] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159153] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159190] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.159381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eac0 is same with the state(5) to be set 00:40:01.654 [2024-07-22 16:53:21.160724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 
16:53:21.160858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.160990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same 
with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161153] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161189] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161305] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161409] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161421] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161457] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.161470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ef60 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.162506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.162532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.162546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.162558] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.655 [2024-07-22 16:53:21.162570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162680] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the 
state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162771] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162845] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162857] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.162998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163265] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 
16:53:21.163290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.163329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f420 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164124] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164149] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same 
with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164314] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.656 [2024-07-22 16:53:21.164376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164388] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164576] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.164832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f8c0 is same with the 
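For anyone triaging this failure from the log alone: the flood above comes from a defensive check in the target-side TCP transport that reports, rather than re-applies, a request to set a queue-pair receive state the qpair is already in. The sketch below shows the general shape of such a guard; the struct and enum names are illustrative placeholders (they are not SPDK's exact definitions), and only the message format is taken from the log.

```c
#include <stdio.h>

/* Illustrative receive-state enum; "state(5)" in the log is whatever
 * value 5 maps to in the real transport's enum. */
enum tqpair_recv_state {
    RECV_STATE_0, RECV_STATE_1, RECV_STATE_2,
    RECV_STATE_3, RECV_STATE_4, RECV_STATE_5,
};

struct tqpair {
    enum tqpair_recv_state recv_state;
};

/* Guard pattern: a transition into the current state is reported and
 * skipped. When the same transition keeps being requested in a tight
 * loop (e.g. during a disconnect), this is what floods the log. */
static void set_recv_state(struct tqpair *tq, enum tqpair_recv_state state)
{
    if (tq->recv_state == state) {
        fprintf(stderr,
                "*ERROR*: The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tq, (int)state);
        return;
    }
    tq->recv_state = state;
}

int main(void)
{
    struct tqpair tq = { RECV_STATE_5 };
    set_recv_state(&tq, RECV_STATE_5); /* triggers the message once */
    return 0;
}
```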
00:40:01.657 [2024-07-22 16:53:21.175495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:40:01.657 [2024-07-22 16:53:21.175589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:01.657 [2024-07-22 16:53:21.175609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:40:01.657 [2024-07-22 16:53:21.175623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:01.657 [2024-07-22 16:53:21.175638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:40:01.657 [2024-07-22 16:53:21.175651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:01.657 [2024-07-22 16:53:21.175665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:40:01.657 [2024-07-22 16:53:21.175678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:01.657 [2024-07-22 16:53:21.175692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f4990 is same with the state(5) to be set
[log condensed: the same nine-entry sequence repeats for tqpair=0xdbf610 (16:53:21.175774–175891)]
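The NOTICE pairs above show the host's outstanding Asynchronous Event Request admin commands (opcode 0x0C) being completed with status ABORTED - SQ DELETION; "(00/08)" is status code type / status code in hex, and per the NVMe base specification SCT 0x0 / SC 0x08 is "Command Aborted due to SQ Deletion", the expected way a controller retires pending AERs when the admin submission queue is torn down. A minimal decoder for that status word is sketched below; the macro and helper names are illustrative, but the constants and the bit layout (phase in bit 0, SC in bits 8:1, SCT in bits 11:9) follow the spec's completion-queue-entry status field.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* NVMe generic command status (SCT 0x0): SC 0x08 = Command Aborted due
 * to SQ Deletion. The log prints this as "(00/08)". */
#define NVME_SCT_GENERIC            0x0
#define NVME_SC_ABORTED_SQ_DELETION 0x08

/* 16-bit status+phase field from completion-queue-entry dword 3:
 * bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT. */
static bool aborted_by_sq_deletion(uint16_t status)
{
    uint8_t sc  = (status >> 1) & 0xff;
    uint8_t sct = (status >> 9) & 0x7;
    return sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION;
}

int main(void)
{
    /* Build the status the log reports: SCT 0x0, SC 0x08, p/m/dnr clear. */
    uint16_t status = (NVME_SCT_GENERIC << 9) | (NVME_SC_ABORTED_SQ_DELETION << 1);
    printf("ABORTED - SQ DELETION? %s\n",
           aborted_by_sq_deletion(status) ? "yes" : "no");
    return 0;
}
```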
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.657 [2024-07-22 16:53:21.175975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.657 [2024-07-22 16:53:21.175991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.657 [2024-07-22 16:53:21.176006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.657 [2024-07-22 16:53:21.176020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.657 [2024-07-22 16:53:21.176033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.657 [2024-07-22 16:53:21.176046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.657 [2024-07-22 16:53:21.176061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.657 [2024-07-22 16:53:21.176073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c8700 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.176125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.657 [2024-07-22 16:53:21.176146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.657 [2024-07-22 16:53:21.176161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.657 [2024-07-22 16:53:21.176175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.657 [2024-07-22 16:53:21.176189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.657 [2024-07-22 16:53:21.176202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.657 [2024-07-22 16:53:21.176217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.657 [2024-07-22 16:53:21.176229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.657 [2024-07-22 16:53:21.176242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12eef50 is same with the state(5) to be set 00:40:01.657 [2024-07-22 16:53:21.176297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.657 [2024-07-22 16:53:21.176318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.657 [2024-07-22 16:53:21.176333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
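The burst of tcp.c:1598 and nvme_tcp.c:323 errors above is produced by a setter that refuses to re-enter the receive state a qpair is already in: state(5) is the numeric value of the requested state and tqpair=0x222f8c0 is the qpair's address, so during teardown the same transition is requested over and over and each attempt is logged. A minimal sketch of that guard pattern, using hypothetical type and function names (recv_state_t, tqpair_set_recv_state) rather than SPDK's actual definitions:

    #include <stdio.h>

    /* Hypothetical receive-state enum; value 5 would correspond to the
     * "state(5)" printed in the log. */
    typedef enum { RECV_STATE_AWAIT_PDU_READY = 0, /* ... */ RECV_STATE_ERROR = 5 } recv_state_t;

    struct tqpair { recv_state_t recv_state; };

    /* Guard pattern: complain and bail out when the "new" state equals the
     * current one, instead of silently re-running entry actions. */
    static void tqpair_set_recv_state(struct tqpair *tq, recv_state_t state)
    {
        if (tq->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tq, (int)state);
            return;
        }
        tq->recv_state = state;
    }

    int main(void)
    {
        struct tqpair tq = { .recv_state = RECV_STATE_ERROR };
        tqpair_set_recv_state(&tq, RECV_STATE_ERROR); /* emits the message, as in the log */
        return 0;
    }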
00:40:01.657 [2024-07-22 16:53:21.176351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.657 [2024-07-22 16:53:21.176364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d0a60 is same with the state(5) to be set 00:40:01.658 [2024-07-22 16:53:21.176461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9400 is same with the state(5) to be set 00:40:01.658 [2024-07-22 16:53:21.176636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176721] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90df0 is same with the state(5) to be set 00:40:01.658 [2024-07-22 16:53:21.176812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.176916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.176929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ffd0 is same with the state(5) to be set 00:40:01.658 [2024-07-22 16:53:21.176983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.177005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.177020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.177033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.177047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.177060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.177078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.177092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
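Every completion printed above carries the same status, "(00/08) ... p:0 m:0 dnr:0": status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), with the phase, more, and do-not-retry bits all clear — the expected outcome for in-flight admin ASYNC EVENT REQUESTs when their submission queue is deleted during a reset. A small sketch that decodes such a status halfword using the NVMe completion-queue-entry bit layout; decode_nvme_status is a hypothetical helper, not an SPDK API:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the 16-bit status halfword from an NVMe completion
     * (CQE dword 3, bits 16..31: P, SC, SCT, CRD, M, DNR). */
    static void decode_nvme_status(uint16_t status)
    {
        unsigned p   = status & 0x1;          /* phase tag */
        unsigned sc  = (status >> 1) & 0xff;  /* status code */
        unsigned sct = (status >> 9) & 0x7;   /* status code type */
        unsigned m   = (status >> 14) & 0x1;  /* more */
        unsigned dnr = (status >> 15) & 0x1;  /* do not retry */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
        /* SCT 0x0 (generic) + SC 0x08 (aborted, SQ deletion) = the "(00/08)"
         * shown on every completion in the log. */
        uint16_t status = (uint16_t)((0x0 << 9) | (0x08 << 1));
        decode_nvme_status(status); /* prints "(00/08) p:0 m:0 dnr:0" */
        return 0;
    }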
00:40:01.658 [2024-07-22 16:53:21.177104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477580 is same with the state(5) to be set 00:40:01.658 [2024-07-22 16:53:21.177153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.177173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.177189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.177202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.177216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.177229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.177242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:01.658 [2024-07-22 16:53:21.177256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.177269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cff0 is same with the state(5) to be set 00:40:01.658 [2024-07-22 16:53:21.178083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.658 [2024-07-22 16:53:21.178540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.658 [2024-07-22 16:53:21.178555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.178984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.178999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.659 [2024-07-22 16:53:21.179691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.659 [2024-07-22 16:53:21.179705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.179722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.179736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.179751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.179765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.179781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.179795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.179810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.179824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.179840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.179857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.179874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.179888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.179903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.179917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.179934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.179971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.179989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:01.660 [2024-07-22 16:53:21.180228] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1447980 was disconnected and freed. reset controller. 
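The aborted I/O commands dumped above follow a simple pattern: each WRITE or READ covers len:128 blocks and the starting lba advances by 128 per command id, i.e. lba = 16384 + cid * 128 (cid 22 gives 19200, cid 63 gives 24448, and the READ listing restarts at cid 0, lba 16384). The last two records then show the completion poller returning CQ transport error -6, which is -ENXIO ("No such device or address"), after which bdev_nvme frees the disconnected qpair and schedules a controller reset. A tiny check of that lba arithmetic, matching the values in the log:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Fixed 128-block stride starting at lba 16384: lba = 16384 + cid * 128. */
        const uint64_t base = 16384, len = 128;
        for (uint32_t cid = 22; cid <= 24; cid++) {
            printf("cid:%u lba:%llu len:%llu\n",
                   cid,
                   (unsigned long long)(base + cid * len),
                   (unsigned long long)len);
        }
        /* prints cid:22 lba:19200, cid:23 lba:19328, cid:24 lba:19456 */
        return 0;
    }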
00:40:01.660 [2024-07-22 16:53:21.180623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 
16:53:21.180948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.180962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.180996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.181011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.181027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.181041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.181057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.181071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.181087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.181101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.181116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.181130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.181146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.181164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.181180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.181194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.660 [2024-07-22 16:53:21.181210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.660 [2024-07-22 16:53:21.181224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 
16:53:21.181270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 
16:53:21.181566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 
16:53:21.181858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.181969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.181987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 
16:53:21.182165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.661 [2024-07-22 16:53:21.182358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.661 [2024-07-22 16:53:21.182374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.182388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.182404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.182417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.182432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.182446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 
16:53:21.182461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:01.662 [2024-07-22 16:53:21.182475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:01.662 [2024-07-22 16:53:21.182490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:01.662 [2024-07-22 16:53:21.182504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:01.662 [2024-07-22 16:53:21.182519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:01.662 [2024-07-22 16:53:21.182533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:01.662 [2024-07-22 16:53:21.182548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:01.662 [2024-07-22 16:53:21.182561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:01.662 [2024-07-22 16:53:21.182575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137d1c0 is same with the state(5) to be set
00:40:01.662 [2024-07-22 16:53:21.183627] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x137d1c0 was disconnected and freed. reset controller.
00:40:01.662 [2024-07-22 16:53:21.183776] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:40:01.662 [2024-07-22 16:53:21.186522] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:40:01.662 [2024-07-22 16:53:21.186569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12eef50 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.186596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f4990 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.186631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf610 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.186661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c8700 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.186692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d0a60 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.186720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9400 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.186746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe90df0 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.186772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146ffd0 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.186800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1477580 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.186824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146cff0 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.188608] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:40:01.662 [2024-07-22 16:53:21.188977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.662 [2024-07-22 16:53:21.189011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12eef50 with addr=10.0.0.2, port=4420
00:40:01.662 [2024-07-22 16:53:21.189029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12eef50 is same with the state(5) to be set
00:40:01.662 [2024-07-22 16:53:21.189119] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:40:01.662 [2024-07-22 16:53:21.189206] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:40:01.662 [2024-07-22 16:53:21.189285] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:40:01.662 [2024-07-22 16:53:21.189375] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:40:01.662 [2024-07-22 16:53:21.189448] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:40:01.662 [2024-07-22 16:53:21.189520] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:40:01.662 [2024-07-22 16:53:21.189592] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:40:01.662 [2024-07-22 16:53:21.189724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.662 [2024-07-22 16:53:21.189752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1477580 with addr=10.0.0.2, port=4420
00:40:01.662 [2024-07-22 16:53:21.189768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477580 is same with the state(5) to be set
00:40:01.662 [2024-07-22 16:53:21.189788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12eef50 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.189921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1477580 (9): Bad file descriptor
00:40:01.662 [2024-07-22 16:53:21.189947] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:40:01.662 [2024-07-22 16:53:21.189961] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:40:01.662 [2024-07-22 16:53:21.189987] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:40:01.662 [2024-07-22 16:53:21.190054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:01.662 [2024-07-22 16:53:21.190074] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:40:01.662 [2024-07-22 16:53:21.190086] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:40:01.662 [2024-07-22 16:53:21.190099] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:40:01.662 [2024-07-22 16:53:21.190164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
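
The records above capture one full bdev_nvme reset cycle over the TCP transport: every command still queued on the deleted submission queue completes with "ABORTED - SQ DELETION (00/08)", the qpair is disconnected and freed, and the reconnect attempts to 10.0.0.2:4420 fail with errno = 111 (ECONNREFUSED on Linux), so spdk_nvme_ctrlr_reconnect_poll_async reports controller reinitialization as failed for cnode5 and cnode10. The "(00/08)" pair printed by spdk_nvme_print_completion is the NVMe status code type (SCT) and status code (SC); SCT 0x0 with SC 0x08 is the generic "Command Aborted due to SQ Deletion" status, which SPDK exposes as SPDK_NVME_SC_ABORTED_SQ_DELETION. A minimal, self-contained C sketch (illustrative only, not SPDK source; the constants mirror the NVMe spec values) showing how that pair decodes:

    #include <stdio.h>
    #include <stdint.h>

    /* NVMe spec values; SPDK mirrors them as SPDK_NVME_SCT_GENERIC and
     * SPDK_NVME_SC_ABORTED_SQ_DELETION in include/spdk/nvme_spec.h. */
    #define NVME_SCT_GENERIC            0x0
    #define NVME_SC_ABORTED_SQ_DELETION 0x08

    /* True when a completion was aborted only because its submission queue
     * was deleted -- the expected status while a reset tears down qpairs. */
    static int aborted_by_sq_deletion(uint8_t sct, uint8_t sc)
    {
        return sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION;
    }

    int main(void)
    {
        uint8_t sct = 0x00, sc = 0x08; /* the "(00/08)" from the log */
        printf("(%02x/%02x) aborted by SQ deletion: %s\n", sct, sc,
               aborted_by_sq_deletion(sct, sc) ? "yes" : "no");
        return 0;
    }

Read this way, the long runs of (00/08) completions before and after this block are expected teardown noise rather than device failures: each queued READ/WRITE is aborted when its submission queue is deleted as part of the controller reset.
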
00:40:01.662 [2024-07-22 16:53:21.196781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.196834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.196863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.196879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.196896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.196910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.196927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.196941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.196956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.196979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.196997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.197011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.197027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.197041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.197057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.197071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.197086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.197100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.197116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.197131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 
16:53:21.197147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.197161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.197177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.197190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.197206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.197230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.197247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.197261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.197277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.197291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.662 [2024-07-22 16:53:21.197306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.662 [2024-07-22 16:53:21.197320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197453] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.197960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.197984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.663 [2024-07-22 16:53:21.198480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.663 [2024-07-22 16:53:21.198493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.198509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.198522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.198538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.198552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.198568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.198581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.198597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.198611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.198627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.198641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.198657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.198671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.198686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.198700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.198716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.198733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.198749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.198763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.198779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe93ff0 is same with the state(5) to be set 00:40:01.664 [2024-07-22 16:53:21.200080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200259] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.664 [2024-07-22 16:53:21.200691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.664 [2024-07-22 16:53:21.200705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.200720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.200734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.200750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.200763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.200782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.200796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.200811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.200825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.200841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.200854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.200870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.200884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.200899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.200913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.200928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.200942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.200957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.200978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.200994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:01.665 [2024-07-22 16:53:21.201473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.665 [2024-07-22 16:53:21.201687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.665 [2024-07-22 16:53:21.201701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.201716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.201730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.201747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.201761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 
16:53:21.201777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.201790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.201806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.201820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.201835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.201849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.201865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.201878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.201894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.201911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.201928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.201942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.201958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.201979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.201996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.202010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.202025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe951a0 is same with the state(5) to be set 00:40:01.666 [2024-07-22 16:53:21.203268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.203311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203327] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.203343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.203373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.203404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.203434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.203464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.203494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.203523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.203558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.203588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.666 [2024-07-22 16:53:21.203617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.666 [2024-07-22 16:53:21.203631] nvme_qpair.c: 
00:40:01.666-00:40:01.672 [2024-07-22 16:53:21.203647 - 16:53:21.213809] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion dump of every outstanding I/O on each TCP qpair being torn down: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 and WRITE sqid:1 cid:0-4 nsid:1 lba:24576-25088 len:128, all SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:01.668 [2024-07-22 16:53:21.205205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129ab50 is same with the state(5) to be set
00:40:01.669 [2024-07-22 16:53:21.208333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14464a0 is same with the state(5) to be set
00:40:01.671 [2024-07-22 16:53:21.211544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1448ea0 is same with the state(5) to be set
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.213825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.213840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.213856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.213870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.213886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.213900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.213916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.213931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.213946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.213961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.213984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.213999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.214015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.214029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.214045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.214060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.214075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.214093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.214109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.214124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.214139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.214154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.214169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.214188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.214205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.214219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.214235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.214249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.214264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.214279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.672 [2024-07-22 16:53:21.214294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.672 [2024-07-22 16:53:21.214309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.214324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a3c0 is same with the state(5) to be set 00:40:01.673 [2024-07-22 16:53:21.215488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215593] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.215969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.215986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.673 [2024-07-22 16:53:21.216555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.673 [2024-07-22 16:53:21.216569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:01.674 [2024-07-22 16:53:21.216845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.216958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.216984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 
16:53:21.217169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.674 [2024-07-22 16:53:21.217450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.674 [2024-07-22 16:53:21.217466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.217480] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.217496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129ba30 is same with the state(5) to be set 00:40:01.675 [2024-07-22 16:53:21.218732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.218755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.218777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.218793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.218811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.218825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.218842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.218856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.218872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.218887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.218903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.218917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.218938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.218953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.218976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.218992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219039] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.675 [2024-07-22 16:53:21.219787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.675 [2024-07-22 16:53:21.219802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.219817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.219833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.219847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.219862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.219877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.219892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.219907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.219923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.219937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.219952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.219973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.219989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:01.676 [2024-07-22 16:53:21.220278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 
16:53:21.220584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:01.676 [2024-07-22 16:53:21.220708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:01.676 [2024-07-22 16:53:21.220723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a4da0 is same with the state(5) to be set 00:40:01.676 [2024-07-22 16:53:21.222288] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:01.676 [2024-07-22 16:53:21.222322] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:40:01.676 [2024-07-22 16:53:21.222342] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:40:01.676 [2024-07-22 16:53:21.222360] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:40:01.676 [2024-07-22 16:53:21.222464] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:01.676 [2024-07-22 16:53:21.222493] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:01.676 [2024-07-22 16:53:21.222520] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:01.676 [2024-07-22 16:53:21.222540] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
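The "(00/08)" in the completions above is the NVMe status pair (status code type / status code): type 0x0 is the generic command status set, and code 0x08 is "Command Aborted due to SQ Deletion". When each TCP qpair is torn down for the controller reset, every READ still queued on it is completed with that status rather than silently dropped, which is why each run of aborts ends at the matching nvme_tcp_qpair_set_recv_state line for its qpair; the repeated bdev_nvme "Unable to perform failover" notices only mean a reset was already in flight. A minimal sketch for tallying such aborts out of a console log in this format (the helper below is hypothetical, not part of SPDK, and assumes one log record per line as reflowed above):

    import re
    from collections import Counter

    # Record fragments as printed by nvme_qpair.c in the excerpt above.
    CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: "
                        r"(?P<opc>\w+) sqid:(?P<sqid>\d+) cid:(?P<cid>\d+)")
    ABORT_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: "
                          r"ABORTED - SQ DELETION \(00/08\)")

    def tally_sq_deletion_aborts(lines):
        """Count printed command/ABORTED-completion pairs per (opcode, sqid)."""
        aborted = Counter()
        pending = None  # most recent printed command, awaiting its completion
        for line in lines:
            cmd = CMD_RE.search(line)
            if cmd:
                pending = (cmd["opc"], int(cmd["sqid"]))
            elif ABORT_RE.search(line) and pending is not None:
                aborted[pending] += 1
                pending = None
        return aborted

    # Usage: tally_sq_deletion_aborts(open("console.log")) - on the full log
    # the four qpairs torn down above would all count against ("READ", 1).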
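The Total line in the bdevperf summary that follows is plain column arithmetic over the ten per-device rows: IOPS, MiB/s and Fail/s sum across devices, and the min/max latencies are the column-wise extremes. A small cross-check with the table's values transcribed by hand (hypothetical verification code, not part of the test harness):

    # (runtime s, IOPS, MiB/s, Fail/s, TO/s, avg us, min us, max us) per device,
    # copied from the summary table below.
    rows = {
        "Nvme1n1":  (0.93, 138.22, 8.64, 69.11, 0.00, 305266.54, 21262.79, 312242.63),
        "Nvme2n1":  (0.93, 137.75, 8.61, 68.87, 0.00, 300192.05, 24078.41, 306028.85),
        "Nvme3n1":  (0.93, 137.28, 8.58, 68.64, 0.00, 295151.19, 27962.03, 282727.16),
        "Nvme4n1":  (0.94, 145.37, 9.09, 65.20, 0.00, 282354.08, 25049.32, 307582.29),
        "Nvme5n1":  (0.91, 140.46, 8.78, 70.23, 0.00, 275876.85,  6262.33, 316902.97),
        "Nvme6n1":  (0.94, 136.35, 8.52, 68.18, 0.00, 279047.84, 22039.51, 307582.29),
        "Nvme7n1":  (0.94, 152.96, 9.56, 50.99, 0.00, 272036.09, 22913.33, 309135.74),
        "Nvme8n1":  (0.94, 135.50, 8.47, 67.75, 0.00, 269329.70, 19418.07, 288940.94),
        "Nvme9n1":  (0.95, 135.03, 8.44, 67.52, 0.00, 264450.91, 23495.87, 299815.06),
        "Nvme10n1": (0.91, 140.26, 8.77, 70.13, 0.00, 246888.42, 10437.21, 335544.32),
    }
    iops  = sum(r[1] for r in rows.values())   # 1399.18, matches Total
    mibs  = sum(r[2] for r in rows.values())   # 87.46 vs 87.45 printed (rounding)
    fails = sum(r[3] for r in rows.values())   # 666.62, matches Total
    lo    = min(r[6] for r in rows.values())   # 6262.33 us (Nvme5n1), matches
    hi    = max(r[7] for r in rows.values())   # 335544.32 us (Nvme10n1), matches
    print(f"{iops:.2f} IOPS, {mibs:.2f} MiB/s, {fails:.2f} Fail/s, {lo:.2f}-{hi:.2f} us")

(The connect() failures with errno = 111 near the end of this excerpt are Linux ECONNREFUSED: nothing was accepting TCP connections on 10.0.0.2:4420 at that point in the reset sequence.)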
00:40:01.676 [2024-07-22 16:53:21.222652] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:40:01.676 [2024-07-22 16:53:21.222676] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:40:01.676 [2024-07-22 16:53:21.222698] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:40:01.676 task offset: 19200 on job bdev=Nvme5n1 fails
00:40:01.676
00:40:01.676 Latency(us)
00:40:01.676 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:40:01.677 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:01.677 Job: Nvme1n1 ended in about 0.93 seconds with error
00:40:01.677 Verification LBA range: start 0x0 length 0x400
00:40:01.677 Nvme1n1            :       0.93     138.22       8.64      69.11     0.00  305266.54   21262.79  312242.63
00:40:01.677 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:01.677 Job: Nvme2n1 ended in about 0.93 seconds with error
00:40:01.677 Verification LBA range: start 0x0 length 0x400
00:40:01.677 Nvme2n1            :       0.93     137.75       8.61      68.87     0.00  300192.05   24078.41  306028.85
00:40:01.677 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:01.677 Job: Nvme3n1 ended in about 0.93 seconds with error
00:40:01.677 Verification LBA range: start 0x0 length 0x400
00:40:01.677 Nvme3n1            :       0.93     137.28       8.58      68.64     0.00  295151.19   27962.03  282727.16
00:40:01.677 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:01.677 Job: Nvme4n1 ended in about 0.94 seconds with error
00:40:01.677 Verification LBA range: start 0x0 length 0x400
00:40:01.677 Nvme4n1            :       0.94     145.37       9.09      65.20     0.00  282354.08   25049.32  307582.29
00:40:01.677 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:01.677 Job: Nvme5n1 ended in about 0.91 seconds with error
00:40:01.677 Verification LBA range: start 0x0 length 0x400
00:40:01.677 Nvme5n1            :       0.91     140.46       8.78      70.23     0.00  275876.85    6262.33  316902.97
00:40:01.677 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:01.677 Job: Nvme6n1 ended in about 0.94 seconds with error
00:40:01.677 Verification LBA range: start 0x0 length 0x400
00:40:01.677 Nvme6n1            :       0.94     136.35       8.52      68.18     0.00  279047.84   22039.51  307582.29
00:40:01.677 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:01.677 Job: Nvme7n1 ended in about 0.94 seconds with error
00:40:01.677 Verification LBA range: start 0x0 length 0x400
00:40:01.677 Nvme7n1            :       0.94     152.96       9.56      50.99     0.00  272036.09   22913.33  309135.74
00:40:01.677 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:01.677 Job: Nvme8n1 ended in about 0.94 seconds with error
00:40:01.677 Verification LBA range: start 0x0 length 0x400
00:40:01.677 Nvme8n1            :       0.94     135.50       8.47      67.75     0.00  269329.70   19418.07  288940.94
00:40:01.677 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:01.677 Job: Nvme9n1 ended in about 0.95 seconds with error
00:40:01.677 Verification LBA range: start 0x0 length 0x400
00:40:01.677 Nvme9n1            :       0.95     135.03       8.44      67.52     0.00  264450.91   23495.87  299815.06
00:40:01.677 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:01.677 Job: Nvme10n1 ended in about 0.91 seconds with error
00:40:01.677 Verification LBA range: start 0x0 length 0x400
00:40:01.677 Nvme10n1           :       0.91     140.26       8.77      70.13     0.00  246888.42   10437.21  335544.32
===================================================================================================================
00:40:01.677 Total              :              1399.18      87.45     666.62     0.00  279067.92    6262.33  335544.32
00:40:01.677 [2024-07-22 16:53:21.248142] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:40:01.677 [2024-07-22 16:53:21.248234] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:40:01.677 [2024-07-22 16:53:21.248514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.677 [2024-07-22 16:53:21.248550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe90df0 with addr=10.0.0.2, port=4420
00:40:01.677 [2024-07-22 16:53:21.248571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90df0 is same with the state(5) to be set
00:40:01.677 [2024-07-22 16:53:21.248697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.677 [2024-07-22 16:53:21.248725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9400 with addr=10.0.0.2, port=4420
00:40:01.677 [2024-07-22 16:53:21.248741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9400 is same with the state(5) to be set
00:40:01.677 [2024-07-22 16:53:21.248899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.677 [2024-07-22 16:53:21.248926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d0a60 with addr=10.0.0.2, port=4420
00:40:01.677 [2024-07-22 16:53:21.248942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d0a60 is same with the state(5) to be set
00:40:01.677 [2024-07-22 16:53:21.249086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.677 [2024-07-22 16:53:21.249113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8700 with addr=10.0.0.2, port=4420
00:40:01.677 [2024-07-22 16:53:21.249130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c8700 is same with the state(5) to be set
00:40:01.677 [2024-07-22 16:53:21.251475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.677 [2024-07-22 16:53:21.251506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f4990 with addr=10.0.0.2, port=4420
00:40:01.677 [2024-07-22 16:53:21.251524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f4990 is same with the state(5) to be set
00:40:01.677 [2024-07-22 16:53:21.251679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.677 [2024-07-22 16:53:21.251704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf610 with addr=10.0.0.2, port=4420
00:40:01.677 [2024-07-22 16:53:21.251730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf610 is same with the state(5) to be set
00:40:01.677 [2024-07-22 16:53:21.251864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.677 [2024-07-22 16:53:21.251891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146ffd0 with addr=10.0.0.2, port=4420
00:40:01.677 [2024-07-22 16:53:21.251906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ffd0 is same with the state(5) to be set
00:40:01.677 [2024-07-22 16:53:21.252071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.677 [2024-07-22 16:53:21.252098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146cff0 with addr=10.0.0.2, port=4420
00:40:01.677 [2024-07-22 16:53:21.252114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146cff0 is same with the state(5) to be set
00:40:01.677 [2024-07-22 16:53:21.252142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe90df0 (9): Bad file descriptor
00:40:01.677 [2024-07-22 16:53:21.252166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9400 (9): Bad file descriptor
00:40:01.677 [2024-07-22 16:53:21.252184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d0a60 (9): Bad file descriptor
00:40:01.677 [2024-07-22 16:53:21.252202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c8700 (9): Bad file descriptor
00:40:01.677 [2024-07-22 16:53:21.252250] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:40:01.677 [2024-07-22 16:53:21.252284] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:40:01.677 [2024-07-22 16:53:21.252307] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:40:01.677 [2024-07-22 16:53:21.252327] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:40:01.677 [2024-07-22 16:53:21.252360] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:40:01.677 [2024-07-22 16:53:21.252378] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:40:01.677 [2024-07-22 16:53:21.252756] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:40:01.677 [2024-07-22 16:53:21.252785] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:40:01.677 [2024-07-22 16:53:21.252842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f4990 (9): Bad file descriptor
00:40:01.677 [2024-07-22 16:53:21.252867] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf610 (9): Bad file descriptor
00:40:01.677 [2024-07-22 16:53:21.252885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146ffd0 (9): Bad file descriptor
00:40:01.677 [2024-07-22 16:53:21.252903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146cff0 (9): Bad file descriptor
00:40:01.677 [2024-07-22 16:53:21.252920] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:01.677 [2024-07-22 16:53:21.252934] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:01.677 [2024-07-22 16:53:21.252951] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:01.677 [2024-07-22 16:53:21.252977] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:40:01.677 [2024-07-22 16:53:21.252994] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:40:01.677 [2024-07-22 16:53:21.253007] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:40:01.677 [2024-07-22 16:53:21.253023] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:40:01.677 [2024-07-22 16:53:21.253037] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:40:01.677 [2024-07-22 16:53:21.253050] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:40:01.677 [2024-07-22 16:53:21.253066] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:40:01.677 [2024-07-22 16:53:21.253079] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:40:01.678 [2024-07-22 16:53:21.253092] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:40:01.678 [2024-07-22 16:53:21.253191] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:01.678 [2024-07-22 16:53:21.253212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:01.678 [2024-07-22 16:53:21.253224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:01.678 [2024-07-22 16:53:21.253236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:01.678 [2024-07-22 16:53:21.253425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.678 [2024-07-22 16:53:21.253452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12eef50 with addr=10.0.0.2, port=4420
00:40:01.678 [2024-07-22 16:53:21.253468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12eef50 is same with the state(5) to be set
00:40:01.678 [2024-07-22 16:53:21.253632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:01.678 [2024-07-22 16:53:21.253657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1477580 with addr=10.0.0.2, port=4420
00:40:01.678 [2024-07-22 16:53:21.253672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477580 is same with the state(5) to be set
00:40:01.678 [2024-07-22 16:53:21.253691] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:40:01.678 [2024-07-22 16:53:21.253705] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:40:01.678 [2024-07-22 16:53:21.253718] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:40:01.678 [2024-07-22 16:53:21.253737] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:40:01.678 [2024-07-22 16:53:21.253751] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:40:01.678 [2024-07-22 16:53:21.253763] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:40:01.678 [2024-07-22 16:53:21.253778] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:40:01.678 [2024-07-22 16:53:21.253791] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:40:01.678 [2024-07-22 16:53:21.253804] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:40:01.678 [2024-07-22 16:53:21.253820] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:40:01.678 [2024-07-22 16:53:21.253832] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:40:01.678 [2024-07-22 16:53:21.253845] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:40:01.678 [2024-07-22 16:53:21.253890] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:01.678 [2024-07-22 16:53:21.253908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:01.678 [2024-07-22 16:53:21.253924] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:01.678 [2024-07-22 16:53:21.253935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:01.678 [2024-07-22 16:53:21.253952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12eef50 (9): Bad file descriptor
00:40:01.678 [2024-07-22 16:53:21.253978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1477580 (9): Bad file descriptor
00:40:01.678 [2024-07-22 16:53:21.254020] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:40:01.678 [2024-07-22 16:53:21.254039] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:40:01.678 [2024-07-22 16:53:21.254053] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:40:01.678 [2024-07-22 16:53:21.254069] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:40:01.678 [2024-07-22 16:53:21.254083] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:40:01.678 [2024-07-22 16:53:21.254095] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:40:01.678 [2024-07-22 16:53:21.254134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:01.678 [2024-07-22 16:53:21.254151] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
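By this point every backend controller (cnode1 through cnode10) has walked the same path: disconnect, a refused TCP reconnect (connect() errno 111, since the target side is already gone), failed reinitialization, and a final "Resetting controller failed." In a live reproduction, what bdev_nvme still knows could be confirmed before teardown by querying the perf application's RPC socket; a sketch only, assuming a bdevperf-style app with its RPC socket at /var/tmp/bdevperf.sock as used elsewhere in this log, with jq purely for readability:

    # inspect remaining controller/bdev state over the app's RPC socket (sketch; socket path assumed)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq '.[].name'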
00:40:02.244 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:40:02.244 16:53:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2887195
00:40:03.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2887195) - No such process
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:40:03.178 16:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:40:05.711 16:53:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:40:05.711
00:40:05.711 real 0m7.232s
00:40:05.711 user 0m17.230s
00:40:05.711 sys 0m1.437s
16:53:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:40:05.711 16:53:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:40:05.711 ************************************
00:40:05.711 END TEST nvmf_shutdown_tc3
00:40:05.711 ************************************
00:40:05.711 16:53:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:40:05.711
00:40:05.711 real 0m28.247s
00:40:05.711 user 1m19.005s
00:40:05.711 sys 0m6.780s
00:40:05.711 16:53:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable
00:40:05.711 16:53:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:40:05.711 ************************************
00:40:05.711 END TEST nvmf_shutdown
00:40:05.711 ************************************
00:40:05.711 16:53:24 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:40:05.711 16:53:24 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:40:05.711 16:53:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:40:05.711 16:53:24 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:40:05.711 16:53:24 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable
00:40:05.711 16:53:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:40:05.711 16:53:24 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:40:05.711 16:53:24 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:40:05.711 16:53:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:40:05.711 16:53:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:40:05.711 16:53:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:40:05.711 ************************************
00:40:05.711 START TEST nvmf_multicontroller
00:40:05.711 ************************************
00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:40:05.711 * Looking for test storage...
00:40:05.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:40:05.711 16:53:24 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:05.711 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:05.712 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:05.712 16:53:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:05.712 16:53:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:05.712 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:05.712 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:05.712 16:53:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:40:05.712 16:53:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:08.243 16:53:27 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:40:08.243 Found 0000:82:00.0 (0x8086 - 0x159b) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:40:08.243 Found 0000:82:00.1 (0x8086 - 0x159b) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:40:08.243 Found net devices under 0000:82:00.0: cvl_0_0 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:40:08.243 Found net devices under 0000:82:00.1: cvl_0_1 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:08.243 16:53:27 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:08.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:08.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:40:08.243 00:40:08.243 --- 10.0.0.2 ping statistics --- 00:40:08.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.243 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:08.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:08.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:40:08.243 00:40:08.243 --- 10.0.0.1 ping statistics --- 00:40:08.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.243 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:40:08.243 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2890001 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2890001 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 2890001 ']' 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:08.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.244 [2024-07-22 16:53:27.536230] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:08.244 [2024-07-22 16:53:27.536318] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:08.244 EAL: No free 2048 kB hugepages reported on node 1 00:40:08.244 [2024-07-22 16:53:27.618764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:08.244 [2024-07-22 16:53:27.709974] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:08.244 [2024-07-22 16:53:27.710029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:08.244 [2024-07-22 16:53:27.710055] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:08.244 [2024-07-22 16:53:27.710069] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:08.244 [2024-07-22 16:53:27.710089] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:08.244 [2024-07-22 16:53:27.710196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:08.244 [2024-07-22 16:53:27.713984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:08.244 [2024-07-22 16:53:27.714049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.244 [2024-07-22 16:53:27.839446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.244 16:53:27 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.244 Malloc0 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.244 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.502 [2024-07-22 16:53:27.902111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.502 [2024-07-22 16:53:27.909995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.502 Malloc1 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2890028 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:40:08.502 16:53:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2890028 /var/tmp/bdevperf.sock 00:40:08.503 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 2890028 ']' 00:40:08.503 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:08.503 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:08.503 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:08.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
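The setup traced above condenses to a short provisioning sequence: one TCP transport, two malloc-backed subsystems each listening on ports 4420 and 4421 of the same address, and a bdevperf instance started in wait-for-RPC mode so controllers can be attached by hand. A consolidated sketch using the same RPC calls and arguments recorded in the trace (rpc.py stands in for the test's rpc_cmd wrapper and must target the nvmf_tgt running inside the cvl_0_0_ns_spdk namespace; run only against a live target):

    # transport plus two subsystems, each with a 64 MiB / 512 B-block malloc namespace and two listeners
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf held idle (-z) on its own RPC socket, exactly as launched above
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f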
00:40:08.503 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable
00:40:08.503 16:53:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:40:08.761 NVMe0n1
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:40:08.761 1
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:40:08.761 request:
00:40:08.761 {
00:40:08.761   "name": "NVMe0",
00:40:08.761   "trtype": "tcp",
00:40:08.761   "traddr": "10.0.0.2",
00:40:08.761   "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:40:08.761   "hostaddr": "10.0.0.2",
00:40:08.761   "hostsvcid": "60000",
00:40:08.761   "adrfam": "ipv4",
00:40:08.761   "trsvcid": "4420",
00:40:08.761   "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:40:08.761   "method": "bdev_nvme_attach_controller",
00:40:08.761   "req_id": 1
00:40:08.761 }
00:40:08.761 Got JSON-RPC error response
00:40:08.761 response:
00:40:08.761 {
00:40:08.761   "code": -114,
00:40:08.761   "message": "A controller named NVMe0 already exists with the specified network path\n"
00:40:08.761 }
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:40:08.761 request:
00:40:08.761 {
00:40:08.761   "name": "NVMe0",
00:40:08.761   "trtype": "tcp",
00:40:08.761   "traddr": "10.0.0.2",
00:40:08.761   "hostaddr": "10.0.0.2",
00:40:08.761   "hostsvcid": "60000",
00:40:08.761   "adrfam": "ipv4",
00:40:08.761   "trsvcid": "4420",
00:40:08.761   "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:40:08.761   "method": "bdev_nvme_attach_controller",
00:40:08.761   "req_id": 1
00:40:08.761 }
00:40:08.761 Got JSON-RPC error response
00:40:08.761 response:
00:40:08.761 {
00:40:08.761   "code": -114,
00:40:08.761   "message": "A controller named NVMe0 already exists with the specified network path\n"
00:40:08.761 }
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:40:08.761 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:08.762 request: 00:40:08.762 { 00:40:08.762 "name": "NVMe0", 00:40:08.762 "trtype": "tcp", 00:40:08.762 "traddr": "10.0.0.2", 00:40:08.762 "hostaddr": "10.0.0.2", 00:40:08.762 "hostsvcid": "60000", 00:40:08.762 "adrfam": "ipv4", 00:40:08.762 "trsvcid": "4420", 00:40:08.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:08.762 "multipath": "disable", 00:40:08.762 "method": "bdev_nvme_attach_controller", 00:40:08.762 "req_id": 1 00:40:08.762 } 00:40:08.762 Got JSON-RPC error response 00:40:08.762 response: 00:40:08.762 { 00:40:08.762 "code": -114, 00:40:08.762 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:40:08.762 } 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:08.762 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:09.020 request: 00:40:09.020 { 00:40:09.020 "name": "NVMe0", 00:40:09.020 "trtype": "tcp", 00:40:09.020 "traddr": "10.0.0.2", 00:40:09.020 "hostaddr": "10.0.0.2", 00:40:09.020 "hostsvcid": "60000", 00:40:09.020 "adrfam": "ipv4", 00:40:09.020 "trsvcid": "4420", 00:40:09.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:09.020 "multipath": "failover", 00:40:09.020 "method": "bdev_nvme_attach_controller", 00:40:09.020 "req_id": 1 00:40:09.020 } 00:40:09.020 Got JSON-RPC error response 00:40:09.020 response: 00:40:09.020 { 00:40:09.020 "code": -114, 00:40:09.020 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:40:09.020 } 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:09.020 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:09.020 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:09.278 00:40:09.278 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:09.278 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:09.278 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:40:09.278 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:09.278 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:09.278 16:53:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:09.278 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:40:09.278 16:53:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:10.651 0 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2890028 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 2890028 ']' 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 2890028 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2890028 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2890028' 00:40:10.651 killing process with pid 2890028 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 2890028 00:40:10.651 16:53:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 2890028 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:40:10.651 16:53:30 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:40:10.651 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:40:10.651 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:40:10.651 [2024-07-22 16:53:28.009206] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:10.651 [2024-07-22 16:53:28.009332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890028 ] 00:40:10.651 EAL: No free 2048 kB hugepages reported on node 1 00:40:10.651 [2024-07-22 16:53:28.079425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:10.651 [2024-07-22 16:53:28.165718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.651 [2024-07-22 16:53:28.770548] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name b2686191-ab39-4a1a-8193-94c09e48ba29 already exists 00:40:10.651 [2024-07-22 16:53:28.770592] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:b2686191-ab39-4a1a-8193-94c09e48ba29 alias for bdev NVMe1n1 00:40:10.651 [2024-07-22 16:53:28.770609] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:40:10.651 Running I/O for 1 seconds... 
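Everything between the two try.txt markers is bdevperf's own captured output, printed back by pap once the run is over and deleted right after. The two bdev.c ERROR lines are expected noise in this test: NVMe1 reaches the same namespace whose UUID alias is already registered to NVMe0n1, so only the duplicate alias is refused while the attach itself succeeds. The "Running I/O" line marks the moment the @95 kick from the driver script lands; in isolation that kick is just (socket path as logged, $SPDK_DIR assumed):

# Hedged sketch: start the queued job in a bdevperf launched with -z,
# then block until it finishes and prints the table below.
$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests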
00:40:10.651 
00:40:10.651 Latency(us) 
00:40:10.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:40:10.651 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 
00:40:10.651 NVMe0n1 : 1.01 19817.66 77.41 0.00 0.00 6441.88 2063.17 12281.93 
00:40:10.651 =================================================================================================================== 
00:40:10.651 Total : 19817.66 77.41 0.00 0.00 6441.88 2063.17 12281.93 
00:40:10.651 Received shutdown signal, test time was about 1.000000 seconds 
00:40:10.651 
00:40:10.651 Latency(us) 
00:40:10.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:40:10.651 =================================================================================================================== 
00:40:10.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:40:10.651 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 
00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:10.652 rmmod nvme_tcp 00:40:10.652 rmmod nvme_fabrics 00:40:10.652 rmmod nvme_keyring 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2890001 ']' 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2890001 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 2890001 ']' 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 2890001 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:10.652 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2890001 00:40:10.910 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:40:10.910 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:40:10.910 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2890001' killing process with pid 2890001 00:40:10.910 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 2890001 00:40:10.910 16:53:30 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 2890001 00:40:11.168 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:11.168 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:11.168 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:11.168 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:11.168 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:11.168 16:53:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:11.168 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:11.168 16:53:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:13.069 16:53:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:13.069 00:40:13.069 real 0m7.777s 00:40:13.069 user 0m11.755s 00:40:13.069 sys 0m2.534s 00:40:13.069 16:53:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:13.069 16:53:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:13.069 ************************************ 00:40:13.069 END TEST nvmf_multicontroller 00:40:13.069 ************************************ 00:40:13.069 16:53:32 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:40:13.069 16:53:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:40:13.069 16:53:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:13.069 16:53:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:13.069 ************************************ 00:40:13.069 START TEST nvmf_aer 00:40:13.069 ************************************ 00:40:13.069 16:53:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:40:13.328 * Looking for test storage... 
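Before the aer output takes over, the multicontroller pass that just ended compresses to one rule of bdev_nvme_attach_controller: a controller name may only be reused for a path consistent with the existing one, otherwise the target answers -114. A hedged recap against the same bdevperf socket ($RPC is an assumed shorthand; NQNs, addresses and flags are taken from the log):

RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
# Creates controller NVMe0 and bdev NVMe0n1:
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Same name, different host NQN: rejected with JSON-RPC error -114.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 || true
# Same name, different subsystem: -114 again.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 || true
# A second portal of the same subsystem is accepted as an additional path:
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_get_controllers   # the script greps this output for the controller count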
00:40:13.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:40:13.328 16:53:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:40:15.856 Found 0000:82:00.0 (0x8086 - 0x159b) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 
0x159b)' 00:40:15.856 Found 0000:82:00.1 (0x8086 - 0x159b) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:40:15.856 Found net devices under 0000:82:00.0: cvl_0_0 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:40:15.856 Found net devices under 0000:82:00.1: cvl_0_1 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:15.856 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:15.856 
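The port scan above has settled on cvl_0_0 as the target interface and cvl_0_1 as the initiator. The entries that follow wall the target port off in its own network namespace so that 10.0.0.1 -> 10.0.0.2 traffic really crosses the physical link; stripped of harness variables, the plumbing reduces to commands that all appear verbatim below:

ip netns add cvl_0_0_ns_spdk                                 # target gets a private netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # sanity check across the wire

From here on the target binary itself is wrapped as ip netns exec cvl_0_0_ns_spdk nvmf_tgt via NVMF_TARGET_NS_CMD, which is visible in the nvmfappstart entry further down.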
16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:15.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:15.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:40:15.857 00:40:15.857 --- 10.0.0.2 ping statistics --- 00:40:15.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.857 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:15.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:15.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:40:15.857 00:40:15.857 --- 10.0.0.1 ping statistics --- 00:40:15.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.857 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2892645 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2892645 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 2892645 ']' 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:15.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:15.857 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:15.857 [2024-07-22 16:53:35.465260] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:15.857 [2024-07-22 16:53:35.465346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:15.857 EAL: No free 2048 kB hugepages reported on node 1 00:40:16.115 [2024-07-22 16:53:35.537852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:16.115 [2024-07-22 16:53:35.622212] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:16.115 [2024-07-22 16:53:35.622259] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:16.115 [2024-07-22 16:53:35.622284] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:16.115 [2024-07-22 16:53:35.622295] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:16.115 [2024-07-22 16:53:35.622306] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:16.115 [2024-07-22 16:53:35.622370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.115 [2024-07-22 16:53:35.622427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:16.115 [2024-07-22 16:53:35.622497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:16.115 [2024-07-22 16:53:35.622500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.115 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:16.115 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:40:16.115 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:16.115 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:16.115 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.373 [2024-07-22 16:53:35.780792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.373 Malloc0 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.373 [2024-07-22 16:53:35.834324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.373 [ 00:40:16.373 { 00:40:16.373 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:16.373 "subtype": "Discovery", 00:40:16.373 "listen_addresses": [], 00:40:16.373 "allow_any_host": true, 00:40:16.373 "hosts": [] 00:40:16.373 }, 00:40:16.373 { 00:40:16.373 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:16.373 "subtype": "NVMe", 00:40:16.373 "listen_addresses": [ 00:40:16.373 { 00:40:16.373 "trtype": "TCP", 00:40:16.373 "adrfam": "IPv4", 00:40:16.373 "traddr": "10.0.0.2", 00:40:16.373 "trsvcid": "4420" 00:40:16.373 } 00:40:16.373 ], 00:40:16.373 "allow_any_host": true, 00:40:16.373 "hosts": [], 00:40:16.373 "serial_number": "SPDK00000000000001", 00:40:16.373 "model_number": "SPDK bdev Controller", 00:40:16.373 "max_namespaces": 2, 00:40:16.373 "min_cntlid": 1, 00:40:16.373 "max_cntlid": 65519, 00:40:16.373 "namespaces": [ 00:40:16.373 { 00:40:16.373 "nsid": 1, 00:40:16.373 "bdev_name": "Malloc0", 00:40:16.373 "name": "Malloc0", 00:40:16.373 "nguid": "A257C5B941534B46AFD2BD112761CD1C", 00:40:16.373 "uuid": "a257c5b9-4153-4b46-afd2-bd112761cd1c" 00:40:16.373 } 00:40:16.373 ] 00:40:16.373 } 00:40:16.373 ] 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2892670 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:40:16.373 EAL: No free 2048 kB hugepages reported on node 1 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:40:16.373 16:53:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 3 -lt 200 ']' 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=4 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.631 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.889 Malloc1 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.889 Asynchronous Event Request test 00:40:16.889 Attaching to 10.0.0.2 00:40:16.889 Attached to 10.0.0.2 00:40:16.889 Registering asynchronous event callbacks... 00:40:16.889 Starting namespace attribute notice tests for all controllers... 00:40:16.889 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:40:16.889 aer_cb - Changed Namespace 00:40:16.889 Cleaning up... 
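The aer tool output interleaved above is driven by a touch-file handshake: aer.sh removes /tmp/aer_touch_file, starts the tool against cnode1 with -n 2, and polls in 0.1 s steps (the i=1..4 loop) until the tool creates the file to signal that its AER callbacks are armed; only then is a second namespace added, which must surface as the "Changed Namespace" event. The same choreography as a hedged sketch ($SPDK_DIR assumed, everything else as logged):

rm -f /tmp/aer_touch_file
$SPDK_DIR/test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
aerpid=$!
until [ -e /tmp/aer_touch_file ]; do sleep 0.1; done   # tool connected, callbacks registered
# This hot-added namespace is what fires aer_cb with the namespace notice:
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait $aerpid   # same wait as host/aer.sh@43

The subsystem dump that follows confirms the result: Malloc1 is registered as nsid 2 alongside Malloc0.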
00:40:16.889 [ 00:40:16.889 { 00:40:16.889 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:16.889 "subtype": "Discovery", 00:40:16.889 "listen_addresses": [], 00:40:16.889 "allow_any_host": true, 00:40:16.889 "hosts": [] 00:40:16.889 }, 00:40:16.889 { 00:40:16.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:16.889 "subtype": "NVMe", 00:40:16.889 "listen_addresses": [ 00:40:16.889 { 00:40:16.889 "trtype": "TCP", 00:40:16.889 "adrfam": "IPv4", 00:40:16.889 "traddr": "10.0.0.2", 00:40:16.889 "trsvcid": "4420" 00:40:16.889 } 00:40:16.889 ], 00:40:16.889 "allow_any_host": true, 00:40:16.889 "hosts": [], 00:40:16.889 "serial_number": "SPDK00000000000001", 00:40:16.889 "model_number": "SPDK bdev Controller", 00:40:16.889 "max_namespaces": 2, 00:40:16.889 "min_cntlid": 1, 00:40:16.889 "max_cntlid": 65519, 00:40:16.889 "namespaces": [ 00:40:16.889 { 00:40:16.889 "nsid": 1, 00:40:16.889 "bdev_name": "Malloc0", 00:40:16.889 "name": "Malloc0", 00:40:16.889 "nguid": "A257C5B941534B46AFD2BD112761CD1C", 00:40:16.889 "uuid": "a257c5b9-4153-4b46-afd2-bd112761cd1c" 00:40:16.889 }, 00:40:16.889 { 00:40:16.889 "nsid": 2, 00:40:16.889 "bdev_name": "Malloc1", 00:40:16.889 "name": "Malloc1", 00:40:16.889 "nguid": "2B6769364D6B4F0E8BB5D6F937B6895E", 00:40:16.889 "uuid": "2b676936-4d6b-4f0e-8bb5-d6f937b6895e" 00:40:16.889 } 00:40:16.889 ] 00:40:16.889 } 00:40:16.889 ] 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2892670 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:16.889 rmmod nvme_tcp 00:40:16.889 rmmod nvme_fabrics 00:40:16.889 rmmod nvme_keyring 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2892645 ']' 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2892645 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 2892645 ']' 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 2892645 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2892645 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2892645' 00:40:16.889 killing process with pid 2892645 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 2892645 00:40:16.889 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 2892645 00:40:17.148 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:17.148 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:17.148 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:17.148 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:17.148 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:17.148 16:53:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.148 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:17.148 16:53:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:19.677 16:53:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:19.677 00:40:19.677 real 0m6.066s 00:40:19.677 user 0m5.162s 00:40:19.677 sys 0m2.271s 00:40:19.677 16:53:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:19.677 16:53:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:19.677 ************************************ 00:40:19.677 END TEST nvmf_aer 00:40:19.677 ************************************ 00:40:19.677 16:53:38 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:40:19.677 16:53:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:40:19.677 16:53:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:19.677 16:53:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:19.677 ************************************ 00:40:19.677 START TEST nvmf_async_init 00:40:19.677 ************************************ 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:40:19.677 * Looking for test storage... 
00:40:19.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:19.677 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=67582d57f7e8486382f5acf3042d2e3b 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:19.678 16:53:38 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:40:19.678 16:53:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:40:21.579 Found 0000:82:00.0 (0x8086 - 0x159b) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:40:21.579 Found 0000:82:00.1 (0x8086 - 0x159b) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:40:21.579 Found net devices under 0000:82:00.0: cvl_0_0 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
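gather_supported_nvmf_pci_devs matches each PCI function against the e810/x722/mlx ID tables built above, then resolves every match to its kernel netdev through the sysfs glob shown in the trace. A one-line sketch of that resolution, using the first port discovered in this run:

  # Resolve a PCI function to its net interface the way common.sh does;
  # 0000:82:00.0 is the E810 port found above.
  ls /sys/bus/pci/devices/0000:82:00.0/net/    # prints: cvl_0_0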
00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:40:21.579 Found net devices under 0000:82:00.1: cvl_0_1 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:21.579 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:21.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:21.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:40:21.838 00:40:21.838 --- 10.0.0.2 ping statistics --- 00:40:21.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:21.838 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:21.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:21.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:40:21.838 00:40:21.838 --- 10.0.0.1 ping statistics --- 00:40:21.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:21.838 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2895021 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2895021 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 2895021 ']' 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:21.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:21.838 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:21.838 [2024-07-22 16:53:41.421660] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
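Before the target comes up, nvmf_tcp_init has split the two E810 ports across network namespaces so initiator and target get distinct stacks on a single host. Condensed from the commands logged above:

  # Target port cvl_0_0 moves into its own namespace; cvl_0_1 stays in
  # the root namespace as the initiator side.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target sanity check, as above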
00:40:21.838 [2024-07-22 16:53:41.421724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:21.838 EAL: No free 2048 kB hugepages reported on node 1 00:40:22.097 [2024-07-22 16:53:41.497609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:22.097 [2024-07-22 16:53:41.586382] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:22.097 [2024-07-22 16:53:41.586444] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:22.097 [2024-07-22 16:53:41.586460] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:22.097 [2024-07-22 16:53:41.586474] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:22.097 [2024-07-22 16:53:41.586486] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:22.097 [2024-07-22 16:53:41.586524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.097 [2024-07-22 16:53:41.729048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.097 null0 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.097 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 67582d57f7e8486382f5acf3042d2e3b 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.355 [2024-07-22 16:53:41.769230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.355 16:53:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.355 nvme0n1 00:40:22.355 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.355 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:40:22.355 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.355 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.613 [ 00:40:22.613 { 00:40:22.613 "name": "nvme0n1", 00:40:22.613 "aliases": [ 00:40:22.613 "67582d57-f7e8-4863-82f5-acf3042d2e3b" 00:40:22.613 ], 00:40:22.613 "product_name": "NVMe disk", 00:40:22.613 "block_size": 512, 00:40:22.613 "num_blocks": 2097152, 00:40:22.613 "uuid": "67582d57-f7e8-4863-82f5-acf3042d2e3b", 00:40:22.613 "assigned_rate_limits": { 00:40:22.613 "rw_ios_per_sec": 0, 00:40:22.613 "rw_mbytes_per_sec": 0, 00:40:22.613 "r_mbytes_per_sec": 0, 00:40:22.613 "w_mbytes_per_sec": 0 00:40:22.613 }, 00:40:22.613 "claimed": false, 00:40:22.613 "zoned": false, 00:40:22.613 "supported_io_types": { 00:40:22.613 "read": true, 00:40:22.613 "write": true, 00:40:22.613 "unmap": false, 00:40:22.613 "write_zeroes": true, 00:40:22.613 "flush": true, 00:40:22.613 "reset": true, 00:40:22.613 "compare": true, 00:40:22.613 "compare_and_write": true, 00:40:22.613 "abort": true, 00:40:22.613 "nvme_admin": true, 00:40:22.613 "nvme_io": true 00:40:22.613 }, 00:40:22.613 "memory_domains": [ 00:40:22.613 { 00:40:22.613 "dma_device_id": "system", 00:40:22.613 "dma_device_type": 1 00:40:22.613 } 00:40:22.613 ], 00:40:22.613 "driver_specific": { 00:40:22.613 "nvme": [ 00:40:22.613 { 00:40:22.613 "trid": { 00:40:22.613 "trtype": "TCP", 00:40:22.613 "adrfam": "IPv4", 00:40:22.613 "traddr": "10.0.0.2", 00:40:22.614 "trsvcid": "4420", 00:40:22.614 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:22.614 }, 00:40:22.614 "ctrlr_data": { 00:40:22.614 "cntlid": 1, 00:40:22.614 "vendor_id": "0x8086", 00:40:22.614 "model_number": "SPDK bdev Controller", 00:40:22.614 "serial_number": "00000000000000000000", 00:40:22.614 "firmware_revision": 
"24.05.1", 00:40:22.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:22.614 "oacs": { 00:40:22.614 "security": 0, 00:40:22.614 "format": 0, 00:40:22.614 "firmware": 0, 00:40:22.614 "ns_manage": 0 00:40:22.614 }, 00:40:22.614 "multi_ctrlr": true, 00:40:22.614 "ana_reporting": false 00:40:22.614 }, 00:40:22.614 "vs": { 00:40:22.614 "nvme_version": "1.3" 00:40:22.614 }, 00:40:22.614 "ns_data": { 00:40:22.614 "id": 1, 00:40:22.614 "can_share": true 00:40:22.614 } 00:40:22.614 } 00:40:22.614 ], 00:40:22.614 "mp_policy": "active_passive" 00:40:22.614 } 00:40:22.614 } 00:40:22.614 ] 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.614 [2024-07-22 16:53:42.021874] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:22.614 [2024-07-22 16:53:42.021977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249fb90 (9): Bad file descriptor 00:40:22.614 [2024-07-22 16:53:42.164120] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.614 [ 00:40:22.614 { 00:40:22.614 "name": "nvme0n1", 00:40:22.614 "aliases": [ 00:40:22.614 "67582d57-f7e8-4863-82f5-acf3042d2e3b" 00:40:22.614 ], 00:40:22.614 "product_name": "NVMe disk", 00:40:22.614 "block_size": 512, 00:40:22.614 "num_blocks": 2097152, 00:40:22.614 "uuid": "67582d57-f7e8-4863-82f5-acf3042d2e3b", 00:40:22.614 "assigned_rate_limits": { 00:40:22.614 "rw_ios_per_sec": 0, 00:40:22.614 "rw_mbytes_per_sec": 0, 00:40:22.614 "r_mbytes_per_sec": 0, 00:40:22.614 "w_mbytes_per_sec": 0 00:40:22.614 }, 00:40:22.614 "claimed": false, 00:40:22.614 "zoned": false, 00:40:22.614 "supported_io_types": { 00:40:22.614 "read": true, 00:40:22.614 "write": true, 00:40:22.614 "unmap": false, 00:40:22.614 "write_zeroes": true, 00:40:22.614 "flush": true, 00:40:22.614 "reset": true, 00:40:22.614 "compare": true, 00:40:22.614 "compare_and_write": true, 00:40:22.614 "abort": true, 00:40:22.614 "nvme_admin": true, 00:40:22.614 "nvme_io": true 00:40:22.614 }, 00:40:22.614 "memory_domains": [ 00:40:22.614 { 00:40:22.614 "dma_device_id": "system", 00:40:22.614 "dma_device_type": 1 00:40:22.614 } 00:40:22.614 ], 00:40:22.614 "driver_specific": { 00:40:22.614 "nvme": [ 00:40:22.614 { 00:40:22.614 "trid": { 00:40:22.614 "trtype": "TCP", 00:40:22.614 "adrfam": "IPv4", 00:40:22.614 "traddr": "10.0.0.2", 00:40:22.614 "trsvcid": "4420", 00:40:22.614 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:22.614 }, 00:40:22.614 "ctrlr_data": { 00:40:22.614 "cntlid": 2, 00:40:22.614 "vendor_id": "0x8086", 00:40:22.614 "model_number": "SPDK bdev Controller", 00:40:22.614 "serial_number": "00000000000000000000", 00:40:22.614 "firmware_revision": "24.05.1", 00:40:22.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:22.614 
"oacs": { 00:40:22.614 "security": 0, 00:40:22.614 "format": 0, 00:40:22.614 "firmware": 0, 00:40:22.614 "ns_manage": 0 00:40:22.614 }, 00:40:22.614 "multi_ctrlr": true, 00:40:22.614 "ana_reporting": false 00:40:22.614 }, 00:40:22.614 "vs": { 00:40:22.614 "nvme_version": "1.3" 00:40:22.614 }, 00:40:22.614 "ns_data": { 00:40:22.614 "id": 1, 00:40:22.614 "can_share": true 00:40:22.614 } 00:40:22.614 } 00:40:22.614 ], 00:40:22.614 "mp_policy": "active_passive" 00:40:22.614 } 00:40:22.614 } 00:40:22.614 ] 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.k7uHv0RfMa 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.k7uHv0RfMa 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.614 [2024-07-22 16:53:42.214546] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:22.614 [2024-07-22 16:53:42.214677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k7uHv0RfMa 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.614 [2024-07-22 16:53:42.222568] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k7uHv0RfMa 00:40:22.614 16:53:42 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.614 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.614 [2024-07-22 16:53:42.230581] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:22.614 [2024-07-22 16:53:42.230651] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:40:22.873 nvme0n1 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:22.873 [ 00:40:22.873 { 00:40:22.873 "name": "nvme0n1", 00:40:22.873 "aliases": [ 00:40:22.873 "67582d57-f7e8-4863-82f5-acf3042d2e3b" 00:40:22.873 ], 00:40:22.873 "product_name": "NVMe disk", 00:40:22.873 "block_size": 512, 00:40:22.873 "num_blocks": 2097152, 00:40:22.873 "uuid": "67582d57-f7e8-4863-82f5-acf3042d2e3b", 00:40:22.873 "assigned_rate_limits": { 00:40:22.873 "rw_ios_per_sec": 0, 00:40:22.873 "rw_mbytes_per_sec": 0, 00:40:22.873 "r_mbytes_per_sec": 0, 00:40:22.873 "w_mbytes_per_sec": 0 00:40:22.873 }, 00:40:22.873 "claimed": false, 00:40:22.873 "zoned": false, 00:40:22.873 "supported_io_types": { 00:40:22.873 "read": true, 00:40:22.873 "write": true, 00:40:22.873 "unmap": false, 00:40:22.873 "write_zeroes": true, 00:40:22.873 "flush": true, 00:40:22.873 "reset": true, 00:40:22.873 "compare": true, 00:40:22.873 "compare_and_write": true, 00:40:22.873 "abort": true, 00:40:22.873 "nvme_admin": true, 00:40:22.873 "nvme_io": true 00:40:22.873 }, 00:40:22.873 "memory_domains": [ 00:40:22.873 { 00:40:22.873 "dma_device_id": "system", 00:40:22.873 "dma_device_type": 1 00:40:22.873 } 00:40:22.873 ], 00:40:22.873 "driver_specific": { 00:40:22.873 "nvme": [ 00:40:22.873 { 00:40:22.873 "trid": { 00:40:22.873 "trtype": "TCP", 00:40:22.873 "adrfam": "IPv4", 00:40:22.873 "traddr": "10.0.0.2", 00:40:22.873 "trsvcid": "4421", 00:40:22.873 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:22.873 }, 00:40:22.873 "ctrlr_data": { 00:40:22.873 "cntlid": 3, 00:40:22.873 "vendor_id": "0x8086", 00:40:22.873 "model_number": "SPDK bdev Controller", 00:40:22.873 "serial_number": "00000000000000000000", 00:40:22.873 "firmware_revision": "24.05.1", 00:40:22.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:22.873 "oacs": { 00:40:22.873 "security": 0, 00:40:22.873 "format": 0, 00:40:22.873 "firmware": 0, 00:40:22.873 "ns_manage": 0 00:40:22.873 }, 00:40:22.873 "multi_ctrlr": true, 00:40:22.873 "ana_reporting": false 00:40:22.873 }, 00:40:22.873 "vs": { 00:40:22.873 "nvme_version": "1.3" 00:40:22.873 }, 00:40:22.873 "ns_data": { 00:40:22.873 "id": 1, 00:40:22.873 "can_share": true 00:40:22.873 } 00:40:22.873 } 00:40:22.873 ], 00:40:22.873 "mp_policy": "active_passive" 00:40:22.873 } 00:40:22.873 } 00:40:22.873 ] 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- 
# set +x 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.k7uHv0RfMa 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:22.873 rmmod nvme_tcp 00:40:22.873 rmmod nvme_fabrics 00:40:22.873 rmmod nvme_keyring 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2895021 ']' 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2895021 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 2895021 ']' 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 2895021 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2895021 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2895021' 00:40:22.873 killing process with pid 2895021 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 2895021 00:40:22.873 [2024-07-22 16:53:42.420237] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:40:22.873 [2024-07-22 16:53:42.420273] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:40:22.873 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 2895021 00:40:23.134 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:23.134 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:23.134 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:23.134 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:23.134 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:23.134 16:53:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:23.134 
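The interesting part of this run is the experimental TLS path: host access is restricted on the subsystem, a --secure-channel listener is added on port 4421, and the host attaches with a pre-shared key. A sketch of the same flow as direct rpc.py calls, with the key file and NQNs taken from the log (the PSK path is deprecated for removal in v24.09, per the warnings above):

  KEY=/tmp/tmp.k7uHv0RfMa
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
  chmod 0600 "$KEY"
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk "$KEY"
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
      -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host1 --psk "$KEY"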
16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:23.134 16:53:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:25.036 16:53:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:25.036 00:40:25.036 real 0m5.862s 00:40:25.036 user 0m2.147s 00:40:25.036 sys 0m2.096s 00:40:25.036 16:53:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:25.036 16:53:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:40:25.036 ************************************ 00:40:25.036 END TEST nvmf_async_init 00:40:25.036 ************************************ 00:40:25.296 16:53:44 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:40:25.296 16:53:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:40:25.296 16:53:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:25.296 16:53:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:25.296 ************************************ 00:40:25.296 START TEST dma 00:40:25.296 ************************************ 00:40:25.296 16:53:44 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:40:25.296 * Looking for test storage... 00:40:25.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:25.296 16:53:44 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:25.296 16:53:44 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:25.296 16:53:44 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:25.296 16:53:44 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:25.296 16:53:44 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.296 16:53:44 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.296 16:53:44 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.296 16:53:44 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:40:25.296 16:53:44 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:25.296 16:53:44 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:25.296 16:53:44 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:40:25.296 16:53:44 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:40:25.296 00:40:25.296 real 0m0.062s 00:40:25.296 user 0m0.030s 00:40:25.296 sys 0m0.037s 00:40:25.296 
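The dma host test is RDMA-only, so under --transport=tcp it reduces to a guard and an immediate exit; that is why the whole test above costs about 60 ms of wall time. The guard, paraphrased from host/dma.sh@12-13 in the trace (variable name assumed):

  # RDMA-only test: bail out immediately on any other transport.
  [ "$TEST_TRANSPORT" != rdma ] && exit 0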
16:53:44 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:25.296 16:53:44 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:40:25.296 ************************************ 00:40:25.296 END TEST dma 00:40:25.296 ************************************ 00:40:25.296 16:53:44 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:40:25.296 16:53:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:40:25.296 16:53:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:25.296 16:53:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:25.296 ************************************ 00:40:25.296 START TEST nvmf_identify 00:40:25.296 ************************************ 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:40:25.296 * Looking for test storage... 00:40:25.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:25.296 16:53:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:25.297 16:53:44 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
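identify.sh sizes its test namespace from the two knobs logged just above (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512). A hypothetical standalone equivalent of the bdev it will create once the target is up:

  # 64 MiB malloc bdev with 512-byte blocks, matching the knobs above
  # (bdev name is illustrative).
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0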
00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:40:25.297 16:53:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:27.827 16:53:47 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:40:27.827 Found 0000:82:00.0 (0x8086 - 0x159b) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:40:27.827 Found 0000:82:00.1 (0x8086 - 0x159b) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:40:27.827 Found net devices under 0000:82:00.0: cvl_0_0 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:40:27.827 Found net devices under 0000:82:00.1: cvl_0_1 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:27.827 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:27.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:40:27.827 00:40:27.827 --- 10.0.0.2 ping statistics --- 00:40:27.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.827 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:40:27.827 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:27.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:27.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:40:27.828 00:40:27.828 --- 10.0.0.1 ping statistics --- 00:40:27.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.828 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2897436 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2897436 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 2897436 ']' 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:27.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:27.828 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:27.828 [2024-07-22 16:53:47.449030] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
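Note: identify.sh then starts the target inside that namespace (host/identify.sh@18 above) and waitforlisten blocks until the process answers on /var/tmp/spdk.sock. A rough equivalent of that wait, sketched with the stock scripts/rpc.py client rather than the autotest helper, assuming it is run from the SPDK repo root (rpc_get_methods is a cheap no-op query):

  # launch nvmf_tgt in the target namespace (arguments copied from the trace),
  # then poll the RPC socket until it responds or the process dies
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.1
  done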
00:40:27.828 [2024-07-22 16:53:47.449127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:28.086 EAL: No free 2048 kB hugepages reported on node 1 00:40:28.086 [2024-07-22 16:53:47.527420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:28.086 [2024-07-22 16:53:47.614816] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:28.086 [2024-07-22 16:53:47.614880] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:28.086 [2024-07-22 16:53:47.614905] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:28.086 [2024-07-22 16:53:47.614917] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:28.086 [2024-07-22 16:53:47.614927] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:28.086 [2024-07-22 16:53:47.615020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:28.086 [2024-07-22 16:53:47.615042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:28.086 [2024-07-22 16:53:47.615111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:28.086 [2024-07-22 16:53:47.615113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:28.345 [2024-07-22 16:53:47.748777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:28.345 Malloc0 00:40:28.345 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:28.346 [2024-07-22 16:53:47.830484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:28.346 [ 00:40:28.346 { 00:40:28.346 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:28.346 "subtype": "Discovery", 00:40:28.346 "listen_addresses": [ 00:40:28.346 { 00:40:28.346 "trtype": "TCP", 00:40:28.346 "adrfam": "IPv4", 00:40:28.346 "traddr": "10.0.0.2", 00:40:28.346 "trsvcid": "4420" 00:40:28.346 } 00:40:28.346 ], 00:40:28.346 "allow_any_host": true, 00:40:28.346 "hosts": [] 00:40:28.346 }, 00:40:28.346 { 00:40:28.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:28.346 "subtype": "NVMe", 00:40:28.346 "listen_addresses": [ 00:40:28.346 { 00:40:28.346 "trtype": "TCP", 00:40:28.346 "adrfam": "IPv4", 00:40:28.346 "traddr": "10.0.0.2", 00:40:28.346 "trsvcid": "4420" 00:40:28.346 } 00:40:28.346 ], 00:40:28.346 "allow_any_host": true, 00:40:28.346 "hosts": [], 00:40:28.346 "serial_number": "SPDK00000000000001", 00:40:28.346 "model_number": "SPDK bdev Controller", 00:40:28.346 "max_namespaces": 32, 00:40:28.346 "min_cntlid": 1, 00:40:28.346 "max_cntlid": 65519, 00:40:28.346 "namespaces": [ 00:40:28.346 { 00:40:28.346 "nsid": 1, 00:40:28.346 "bdev_name": "Malloc0", 00:40:28.346 "name": "Malloc0", 00:40:28.346 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:40:28.346 "eui64": "ABCDEF0123456789", 00:40:28.346 "uuid": "a839c61b-f9b8-407d-ad2e-126e97872fec" 00:40:28.346 } 00:40:28.346 ] 00:40:28.346 } 00:40:28.346 ] 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:28.346 16:53:47 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:40:28.346 [2024-07-22 16:53:47.870153] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
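Note: with the target listening, identify.sh provisions it over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values set at the top of the script), a subsystem exposing that bdev as namespace 1, and listeners for both the subsystem and discovery. The same sequence as direct scripts/rpc.py calls, a sketch of what the test's rpc_cmd wrapper ends up invoking (the nvmf_get_subsystems output is the JSON dump above):

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_get_subsystems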
00:40:28.346 [2024-07-22 16:53:47.870195] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897471 ] 00:40:28.346 EAL: No free 2048 kB hugepages reported on node 1 00:40:28.346 [2024-07-22 16:53:47.904389] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:40:28.346 [2024-07-22 16:53:47.904447] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:40:28.346 [2024-07-22 16:53:47.904457] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:40:28.346 [2024-07-22 16:53:47.904472] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:40:28.346 [2024-07-22 16:53:47.904489] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:40:28.346 [2024-07-22 16:53:47.908047] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:40:28.346 [2024-07-22 16:53:47.908107] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1804120 0 00:40:28.346 [2024-07-22 16:53:47.915979] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:40:28.346 [2024-07-22 16:53:47.916003] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:40:28.346 [2024-07-22 16:53:47.916025] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:40:28.346 [2024-07-22 16:53:47.916030] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:40:28.346 [2024-07-22 16:53:47.916082] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.916095] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.916102] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1804120) 00:40:28.346 [2024-07-22 16:53:47.916121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:40:28.346 [2024-07-22 16:53:47.916146] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d1f0, cid 0, qid 0 00:40:28.346 [2024-07-22 16:53:47.923992] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.346 [2024-07-22 16:53:47.924010] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.346 [2024-07-22 16:53:47.924018] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924026] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d1f0) on tqpair=0x1804120 00:40:28.346 [2024-07-22 16:53:47.924048] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:40:28.346 [2024-07-22 16:53:47.924059] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:40:28.346 [2024-07-22 16:53:47.924068] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:40:28.346 [2024-07-22 16:53:47.924093] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924102] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924109] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1804120) 00:40:28.346 [2024-07-22 16:53:47.924120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.346 [2024-07-22 16:53:47.924144] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d1f0, cid 0, qid 0 00:40:28.346 [2024-07-22 16:53:47.924292] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.346 [2024-07-22 16:53:47.924304] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.346 [2024-07-22 16:53:47.924311] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924317] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d1f0) on tqpair=0x1804120 00:40:28.346 [2024-07-22 16:53:47.924332] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:40:28.346 [2024-07-22 16:53:47.924346] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:40:28.346 [2024-07-22 16:53:47.924357] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924365] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924371] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1804120) 00:40:28.346 [2024-07-22 16:53:47.924381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.346 [2024-07-22 16:53:47.924401] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d1f0, cid 0, qid 0 00:40:28.346 [2024-07-22 16:53:47.924537] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.346 [2024-07-22 16:53:47.924549] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.346 [2024-07-22 16:53:47.924555] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924562] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d1f0) on tqpair=0x1804120 00:40:28.346 [2024-07-22 16:53:47.924572] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:40:28.346 [2024-07-22 16:53:47.924586] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:40:28.346 [2024-07-22 16:53:47.924597] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924604] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924610] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1804120) 00:40:28.346 [2024-07-22 16:53:47.924620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.346 [2024-07-22 16:53:47.924640] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d1f0, cid 0, qid 0 00:40:28.346 [2024-07-22 16:53:47.924745] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.346 [2024-07-22 
16:53:47.924759] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.346 [2024-07-22 16:53:47.924766] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924772] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d1f0) on tqpair=0x1804120 00:40:28.346 [2024-07-22 16:53:47.924782] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:40:28.346 [2024-07-22 16:53:47.924799] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924807] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.346 [2024-07-22 16:53:47.924814] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1804120) 00:40:28.346 [2024-07-22 16:53:47.924824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.346 [2024-07-22 16:53:47.924844] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d1f0, cid 0, qid 0 00:40:28.347 [2024-07-22 16:53:47.924949] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.347 [2024-07-22 16:53:47.924960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.347 [2024-07-22 16:53:47.924991] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.924999] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d1f0) on tqpair=0x1804120 00:40:28.347 [2024-07-22 16:53:47.925009] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:40:28.347 [2024-07-22 16:53:47.925018] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:40:28.347 [2024-07-22 16:53:47.925032] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:40:28.347 [2024-07-22 16:53:47.925142] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:40:28.347 [2024-07-22 16:53:47.925150] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:40:28.347 [2024-07-22 16:53:47.925165] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.925173] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.925179] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1804120) 00:40:28.347 [2024-07-22 16:53:47.925193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.347 [2024-07-22 16:53:47.925215] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d1f0, cid 0, qid 0 00:40:28.347 [2024-07-22 16:53:47.925381] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.347 [2024-07-22 16:53:47.925396] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.347 [2024-07-22 16:53:47.925402] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.925409] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d1f0) on tqpair=0x1804120 00:40:28.347 [2024-07-22 16:53:47.925419] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:40:28.347 [2024-07-22 16:53:47.925435] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.925444] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.925450] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1804120) 00:40:28.347 [2024-07-22 16:53:47.925460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.347 [2024-07-22 16:53:47.925480] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d1f0, cid 0, qid 0 00:40:28.347 [2024-07-22 16:53:47.925631] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.347 [2024-07-22 16:53:47.925642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.347 [2024-07-22 16:53:47.925649] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.925656] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d1f0) on tqpair=0x1804120 00:40:28.347 [2024-07-22 16:53:47.925665] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:40:28.347 [2024-07-22 16:53:47.925673] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:40:28.347 [2024-07-22 16:53:47.925686] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:40:28.347 [2024-07-22 16:53:47.925699] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:40:28.347 [2024-07-22 16:53:47.925717] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.925726] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1804120) 00:40:28.347 [2024-07-22 16:53:47.925736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.347 [2024-07-22 16:53:47.925756] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d1f0, cid 0, qid 0 00:40:28.347 [2024-07-22 16:53:47.925897] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.347 [2024-07-22 16:53:47.925911] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.347 [2024-07-22 16:53:47.925918] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.925924] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1804120): datao=0, datal=4096, cccid=0 00:40:28.347 [2024-07-22 16:53:47.925931] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x185d1f0) on tqpair(0x1804120): expected_datao=0, payload_size=4096 00:40:28.347 [2024-07-22 16:53:47.925939] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.925981] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.925994] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926054] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.347 [2024-07-22 16:53:47.926073] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.347 [2024-07-22 16:53:47.926081] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926087] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d1f0) on tqpair=0x1804120 00:40:28.347 [2024-07-22 16:53:47.926106] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:40:28.347 [2024-07-22 16:53:47.926116] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:40:28.347 [2024-07-22 16:53:47.926125] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:40:28.347 [2024-07-22 16:53:47.926133] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:40:28.347 [2024-07-22 16:53:47.926141] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:40:28.347 [2024-07-22 16:53:47.926150] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:40:28.347 [2024-07-22 16:53:47.926165] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:40:28.347 [2024-07-22 16:53:47.926178] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926185] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926192] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1804120) 00:40:28.347 [2024-07-22 16:53:47.926204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:28.347 [2024-07-22 16:53:47.926226] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d1f0, cid 0, qid 0 00:40:28.347 [2024-07-22 16:53:47.926420] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.347 [2024-07-22 16:53:47.926433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.347 [2024-07-22 16:53:47.926439] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926446] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d1f0) on tqpair=0x1804120 00:40:28.347 [2024-07-22 16:53:47.926461] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926469] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926475] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1804120) 00:40:28.347 [2024-07-22 16:53:47.926485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:40:28.347 [2024-07-22 16:53:47.926494] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926501] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926507] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1804120) 00:40:28.347 [2024-07-22 16:53:47.926515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:28.347 [2024-07-22 16:53:47.926524] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926531] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926537] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1804120) 00:40:28.347 [2024-07-22 16:53:47.926545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:28.347 [2024-07-22 16:53:47.926554] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926561] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926567] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1804120) 00:40:28.347 [2024-07-22 16:53:47.926579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:28.347 [2024-07-22 16:53:47.926589] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:40:28.347 [2024-07-22 16:53:47.926607] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:40:28.347 [2024-07-22 16:53:47.926619] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926626] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1804120) 00:40:28.347 [2024-07-22 16:53:47.926636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.347 [2024-07-22 16:53:47.926658] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d1f0, cid 0, qid 0 00:40:28.347 [2024-07-22 16:53:47.926669] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d350, cid 1, qid 0 00:40:28.347 [2024-07-22 16:53:47.926677] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d4b0, cid 2, qid 0 00:40:28.347 [2024-07-22 16:53:47.926698] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d610, cid 3, qid 0 00:40:28.347 [2024-07-22 16:53:47.926706] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d770, cid 4, qid 0 00:40:28.347 [2024-07-22 16:53:47.926877] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.347 [2024-07-22 16:53:47.926889] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.347 [2024-07-22 16:53:47.926895] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.347 [2024-07-22 16:53:47.926902] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d770) on tqpair=0x1804120 
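Note: the debug lines above and below are the host-side controller state machine bringing the discovery controller's admin queue to the ready state. Read through the "setting state to ..." messages, the sequence is: FABRIC CONNECT on the admin queue, read VS and CAP, check CC.EN, disable the controller and wait for CSTS.RDY = 0, write CC.EN = 1 and wait for CSTS.RDY = 1, IDENTIFY controller (the transport reports a 4 GiB max transfer but MDTS caps it at 131072 bytes here), arm four asynchronous event requests (cid 0 through 3), and finally set the keep-alive timeout (5000000 us, per the line below) before the controller is marked ready.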
00:40:28.347 [2024-07-22 16:53:47.926913] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:40:28.347 [2024-07-22 16:53:47.926922] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:40:28.347 [2024-07-22 16:53:47.926939] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.926973] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1804120) 00:40:28.348 [2024-07-22 16:53:47.926985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.348 [2024-07-22 16:53:47.927006] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d770, cid 4, qid 0 00:40:28.348 [2024-07-22 16:53:47.927140] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.348 [2024-07-22 16:53:47.927155] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.348 [2024-07-22 16:53:47.927161] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.927168] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1804120): datao=0, datal=4096, cccid=4 00:40:28.348 [2024-07-22 16:53:47.927175] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x185d770) on tqpair(0x1804120): expected_datao=0, payload_size=4096 00:40:28.348 [2024-07-22 16:53:47.927182] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.927199] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.927208] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.971977] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.348 [2024-07-22 16:53:47.971995] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.348 [2024-07-22 16:53:47.972004] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.972025] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d770) on tqpair=0x1804120 00:40:28.348 [2024-07-22 16:53:47.972054] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:40:28.348 [2024-07-22 16:53:47.972093] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.972105] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1804120) 00:40:28.348 [2024-07-22 16:53:47.972117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.348 [2024-07-22 16:53:47.972129] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.972136] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.972143] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1804120) 00:40:28.348 [2024-07-22 16:53:47.972152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:40:28.348 [2024-07-22 16:53:47.972181] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d770, cid 4, qid 0 00:40:28.348 [2024-07-22 16:53:47.972194] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d8d0, cid 5, qid 0 00:40:28.348 [2024-07-22 16:53:47.972390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.348 [2024-07-22 16:53:47.972405] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.348 [2024-07-22 16:53:47.972412] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.972418] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1804120): datao=0, datal=1024, cccid=4 00:40:28.348 [2024-07-22 16:53:47.972425] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x185d770) on tqpair(0x1804120): expected_datao=0, payload_size=1024 00:40:28.348 [2024-07-22 16:53:47.972432] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.972441] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.972448] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.972456] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.348 [2024-07-22 16:53:47.972465] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.348 [2024-07-22 16:53:47.972471] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.348 [2024-07-22 16:53:47.972477] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d8d0) on tqpair=0x1804120 00:40:28.613 [2024-07-22 16:53:48.013137] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.613 [2024-07-22 16:53:48.013157] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.613 [2024-07-22 16:53:48.013165] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.013172] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d770) on tqpair=0x1804120 00:40:28.613 [2024-07-22 16:53:48.013196] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.013206] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1804120) 00:40:28.613 [2024-07-22 16:53:48.013218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.613 [2024-07-22 16:53:48.013248] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d770, cid 4, qid 0 00:40:28.613 [2024-07-22 16:53:48.013395] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.613 [2024-07-22 16:53:48.013410] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.613 [2024-07-22 16:53:48.013417] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.013423] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1804120): datao=0, datal=3072, cccid=4 00:40:28.613 [2024-07-22 16:53:48.013430] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x185d770) on tqpair(0x1804120): expected_datao=0, payload_size=3072 00:40:28.613 [2024-07-22 16:53:48.013444] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.013455] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:40:28.613 [2024-07-22 16:53:48.013462] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.013474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.613 [2024-07-22 16:53:48.013484] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.613 [2024-07-22 16:53:48.013490] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.013497] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d770) on tqpair=0x1804120 00:40:28.613 [2024-07-22 16:53:48.013513] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.013521] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1804120) 00:40:28.613 [2024-07-22 16:53:48.013531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.613 [2024-07-22 16:53:48.013560] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d770, cid 4, qid 0 00:40:28.613 [2024-07-22 16:53:48.013674] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.613 [2024-07-22 16:53:48.013688] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.613 [2024-07-22 16:53:48.013694] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.013700] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1804120): datao=0, datal=8, cccid=4 00:40:28.613 [2024-07-22 16:53:48.013708] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x185d770) on tqpair(0x1804120): expected_datao=0, payload_size=8 00:40:28.613 [2024-07-22 16:53:48.013715] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.013724] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.013731] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.055118] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.613 [2024-07-22 16:53:48.055136] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.613 [2024-07-22 16:53:48.055144] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.613 [2024-07-22 16:53:48.055151] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d770) on tqpair=0x1804120 00:40:28.613 ===================================================== 00:40:28.613 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:40:28.613 ===================================================== 00:40:28.613 Controller Capabilities/Features 00:40:28.613 ================================ 00:40:28.613 Vendor ID: 0000 00:40:28.613 Subsystem Vendor ID: 0000 00:40:28.613 Serial Number: .................... 00:40:28.613 Model Number: ........................................ 
00:40:28.613 Firmware Version: 24.05.1 00:40:28.613 Recommended Arb Burst: 0 00:40:28.613 IEEE OUI Identifier: 00 00 00 00:40:28.613 Multi-path I/O 00:40:28.613 May have multiple subsystem ports: No 00:40:28.613 May have multiple controllers: No 00:40:28.613 Associated with SR-IOV VF: No 00:40:28.613 Max Data Transfer Size: 131072 00:40:28.613 Max Number of Namespaces: 0 00:40:28.613 Max Number of I/O Queues: 1024 00:40:28.613 NVMe Specification Version (VS): 1.3 00:40:28.613 NVMe Specification Version (Identify): 1.3 00:40:28.613 Maximum Queue Entries: 128 00:40:28.613 Contiguous Queues Required: Yes 00:40:28.613 Arbitration Mechanisms Supported 00:40:28.613 Weighted Round Robin: Not Supported 00:40:28.613 Vendor Specific: Not Supported 00:40:28.613 Reset Timeout: 15000 ms 00:40:28.613 Doorbell Stride: 4 bytes 00:40:28.613 NVM Subsystem Reset: Not Supported 00:40:28.613 Command Sets Supported 00:40:28.613 NVM Command Set: Supported 00:40:28.613 Boot Partition: Not Supported 00:40:28.613 Memory Page Size Minimum: 4096 bytes 00:40:28.614 Memory Page Size Maximum: 4096 bytes 00:40:28.614 Persistent Memory Region: Not Supported 00:40:28.614 Optional Asynchronous Events Supported 00:40:28.614 Namespace Attribute Notices: Not Supported 00:40:28.614 Firmware Activation Notices: Not Supported 00:40:28.614 ANA Change Notices: Not Supported 00:40:28.614 PLE Aggregate Log Change Notices: Not Supported 00:40:28.614 LBA Status Info Alert Notices: Not Supported 00:40:28.614 EGE Aggregate Log Change Notices: Not Supported 00:40:28.614 Normal NVM Subsystem Shutdown event: Not Supported 00:40:28.614 Zone Descriptor Change Notices: Not Supported 00:40:28.614 Discovery Log Change Notices: Supported 00:40:28.614 Controller Attributes 00:40:28.614 128-bit Host Identifier: Not Supported 00:40:28.614 Non-Operational Permissive Mode: Not Supported 00:40:28.614 NVM Sets: Not Supported 00:40:28.614 Read Recovery Levels: Not Supported 00:40:28.614 Endurance Groups: Not Supported 00:40:28.614 Predictable Latency Mode: Not Supported 00:40:28.614 Traffic Based Keep ALive: Not Supported 00:40:28.614 Namespace Granularity: Not Supported 00:40:28.614 SQ Associations: Not Supported 00:40:28.614 UUID List: Not Supported 00:40:28.614 Multi-Domain Subsystem: Not Supported 00:40:28.614 Fixed Capacity Management: Not Supported 00:40:28.614 Variable Capacity Management: Not Supported 00:40:28.614 Delete Endurance Group: Not Supported 00:40:28.614 Delete NVM Set: Not Supported 00:40:28.614 Extended LBA Formats Supported: Not Supported 00:40:28.614 Flexible Data Placement Supported: Not Supported 00:40:28.614 00:40:28.614 Controller Memory Buffer Support 00:40:28.614 ================================ 00:40:28.614 Supported: No 00:40:28.614 00:40:28.614 Persistent Memory Region Support 00:40:28.614 ================================ 00:40:28.614 Supported: No 00:40:28.614 00:40:28.614 Admin Command Set Attributes 00:40:28.614 ============================ 00:40:28.614 Security Send/Receive: Not Supported 00:40:28.614 Format NVM: Not Supported 00:40:28.614 Firmware Activate/Download: Not Supported 00:40:28.614 Namespace Management: Not Supported 00:40:28.614 Device Self-Test: Not Supported 00:40:28.614 Directives: Not Supported 00:40:28.614 NVMe-MI: Not Supported 00:40:28.614 Virtualization Management: Not Supported 00:40:28.614 Doorbell Buffer Config: Not Supported 00:40:28.614 Get LBA Status Capability: Not Supported 00:40:28.614 Command & Feature Lockdown Capability: Not Supported 00:40:28.614 Abort Command Limit: 1 00:40:28.614 
Async Event Request Limit: 4 00:40:28.614 Number of Firmware Slots: N/A 00:40:28.614 Firmware Slot 1 Read-Only: N/A 00:40:28.614 Firmware Activation Without Reset: N/A 00:40:28.614 Multiple Update Detection Support: N/A 00:40:28.614 Firmware Update Granularity: No Information Provided 00:40:28.614 Per-Namespace SMART Log: No 00:40:28.614 Asymmetric Namespace Access Log Page: Not Supported 00:40:28.614 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:40:28.614 Command Effects Log Page: Not Supported 00:40:28.614 Get Log Page Extended Data: Supported 00:40:28.614 Telemetry Log Pages: Not Supported 00:40:28.614 Persistent Event Log Pages: Not Supported 00:40:28.614 Supported Log Pages Log Page: May Support 00:40:28.614 Commands Supported & Effects Log Page: Not Supported 00:40:28.614 Feature Identifiers & Effects Log Page:May Support 00:40:28.614 NVMe-MI Commands & Effects Log Page: May Support 00:40:28.614 Data Area 4 for Telemetry Log: Not Supported 00:40:28.614 Error Log Page Entries Supported: 128 00:40:28.614 Keep Alive: Not Supported 00:40:28.614 00:40:28.614 NVM Command Set Attributes 00:40:28.614 ========================== 00:40:28.614 Submission Queue Entry Size 00:40:28.614 Max: 1 00:40:28.614 Min: 1 00:40:28.614 Completion Queue Entry Size 00:40:28.614 Max: 1 00:40:28.614 Min: 1 00:40:28.614 Number of Namespaces: 0 00:40:28.614 Compare Command: Not Supported 00:40:28.614 Write Uncorrectable Command: Not Supported 00:40:28.614 Dataset Management Command: Not Supported 00:40:28.614 Write Zeroes Command: Not Supported 00:40:28.614 Set Features Save Field: Not Supported 00:40:28.614 Reservations: Not Supported 00:40:28.614 Timestamp: Not Supported 00:40:28.614 Copy: Not Supported 00:40:28.614 Volatile Write Cache: Not Present 00:40:28.614 Atomic Write Unit (Normal): 1 00:40:28.614 Atomic Write Unit (PFail): 1 00:40:28.614 Atomic Compare & Write Unit: 1 00:40:28.614 Fused Compare & Write: Supported 00:40:28.614 Scatter-Gather List 00:40:28.614 SGL Command Set: Supported 00:40:28.614 SGL Keyed: Supported 00:40:28.614 SGL Bit Bucket Descriptor: Not Supported 00:40:28.614 SGL Metadata Pointer: Not Supported 00:40:28.614 Oversized SGL: Not Supported 00:40:28.614 SGL Metadata Address: Not Supported 00:40:28.614 SGL Offset: Supported 00:40:28.614 Transport SGL Data Block: Not Supported 00:40:28.614 Replay Protected Memory Block: Not Supported 00:40:28.614 00:40:28.614 Firmware Slot Information 00:40:28.614 ========================= 00:40:28.614 Active slot: 0 00:40:28.614 00:40:28.614 00:40:28.614 Error Log 00:40:28.614 ========= 00:40:28.614 00:40:28.614 Active Namespaces 00:40:28.614 ================= 00:40:28.614 Discovery Log Page 00:40:28.614 ================== 00:40:28.614 Generation Counter: 2 00:40:28.614 Number of Records: 2 00:40:28.614 Record Format: 0 00:40:28.614 00:40:28.614 Discovery Log Entry 0 00:40:28.614 ---------------------- 00:40:28.614 Transport Type: 3 (TCP) 00:40:28.614 Address Family: 1 (IPv4) 00:40:28.614 Subsystem Type: 3 (Current Discovery Subsystem) 00:40:28.614 Entry Flags: 00:40:28.614 Duplicate Returned Information: 1 00:40:28.614 Explicit Persistent Connection Support for Discovery: 1 00:40:28.614 Transport Requirements: 00:40:28.614 Secure Channel: Not Required 00:40:28.614 Port ID: 0 (0x0000) 00:40:28.614 Controller ID: 65535 (0xffff) 00:40:28.614 Admin Max SQ Size: 128 00:40:28.614 Transport Service Identifier: 4420 00:40:28.614 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:40:28.614 Transport Address: 10.0.0.2 00:40:28.614 
Discovery Log Entry 1 00:40:28.614 ---------------------- 00:40:28.614 Transport Type: 3 (TCP) 00:40:28.614 Address Family: 1 (IPv4) 00:40:28.614 Subsystem Type: 2 (NVM Subsystem) 00:40:28.614 Entry Flags: 00:40:28.614 Duplicate Returned Information: 0 00:40:28.614 Explicit Persistent Connection Support for Discovery: 0 00:40:28.614 Transport Requirements: 00:40:28.614 Secure Channel: Not Required 00:40:28.614 Port ID: 0 (0x0000) 00:40:28.614 Controller ID: 65535 (0xffff) 00:40:28.614 Admin Max SQ Size: 128 00:40:28.614 Transport Service Identifier: 4420 00:40:28.614 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:40:28.614 Transport Address: 10.0.0.2 [2024-07-22 16:53:48.055276] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:40:28.614 [2024-07-22 16:53:48.055303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.614 [2024-07-22 16:53:48.055315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.614 [2024-07-22 16:53:48.055324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.614 [2024-07-22 16:53:48.055333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.614 [2024-07-22 16:53:48.055351] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.614 [2024-07-22 16:53:48.055359] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.614 [2024-07-22 16:53:48.055366] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1804120) 00:40:28.614 [2024-07-22 16:53:48.055377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.614 [2024-07-22 16:53:48.055402] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d610, cid 3, qid 0 00:40:28.614 [2024-07-22 16:53:48.055585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.614 [2024-07-22 16:53:48.055597] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.614 [2024-07-22 16:53:48.055608] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.614 [2024-07-22 16:53:48.055615] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d610) on tqpair=0x1804120 00:40:28.614 [2024-07-22 16:53:48.055628] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.614 [2024-07-22 16:53:48.055636] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.614 [2024-07-22 16:53:48.055642] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1804120) 00:40:28.614 [2024-07-22 16:53:48.055652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.614 [2024-07-22 16:53:48.055677] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d610, cid 3, qid 0 00:40:28.614 [2024-07-22 16:53:48.055801] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.614 [2024-07-22 16:53:48.055812] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.614 [2024-07-22 16:53:48.055819] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.614 [2024-07-22 16:53:48.055825] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d610) on tqpair=0x1804120 00:40:28.614 [2024-07-22 16:53:48.055835] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:40:28.614 [2024-07-22 16:53:48.055843] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:40:28.614 [2024-07-22 16:53:48.055859] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.614 [2024-07-22 16:53:48.055867] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.614 [2024-07-22 16:53:48.055873] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1804120) 00:40:28.614 [2024-07-22 16:53:48.055883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.614 [2024-07-22 16:53:48.055903] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d610, cid 3, qid 0 00:40:28.614 [2024-07-22 16:53:48.059995] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.615 [2024-07-22 16:53:48.060011] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.615 [2024-07-22 16:53:48.060018] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.060025] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d610) on tqpair=0x1804120 00:40:28.615 [2024-07-22 16:53:48.060047] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.060056] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.060063] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1804120) 00:40:28.615 [2024-07-22 16:53:48.060073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.615 [2024-07-22 16:53:48.060096] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x185d610, cid 3, qid 0 00:40:28.615 [2024-07-22 16:53:48.060288] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.615 [2024-07-22 16:53:48.060303] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.615 [2024-07-22 16:53:48.060309] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.060316] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x185d610) on tqpair=0x1804120 00:40:28.615 [2024-07-22 16:53:48.060330] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:40:28.615 00:40:28.615 16:53:48 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:40:28.615 [2024-07-22 16:53:48.094279] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
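
The second spdk_nvme_identify run above targets the data subsystem directly via a transport-ID string rather than going through discovery. A minimal sketch of doing the same from C, assuming only public SPDK API (spdk/env.h, spdk/nvme.h); error handling is trimmed, env options are defaults, and the app name is made up:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&opts);
    opts.name = "identify_sketch";    /* hypothetical app name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* Same transport-ID string format the identify tool's -r flag takes. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* spdk_nvme_connect() drives the whole state machine the -L all
     * trace below walks through: icreq/icresp, FABRIC CONNECT, VS/CAP
     * property reads, CC.EN = 1, wait for CSTS.RDY = 1, then IDENTIFY
     * and AER/keep-alive/queue-count setup. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect to %s failed\n", trid.subnqn);
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Model: %.40s  Serial: %.20s\n",
           (const char *)cdata->mn, (const char *)cdata->sn);

    spdk_nvme_detach(ctrlr);
    return 0;
}
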
00:40:28.615 [2024-07-22 16:53:48.094325] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897579 ] 00:40:28.615 EAL: No free 2048 kB hugepages reported on node 1 00:40:28.615 [2024-07-22 16:53:48.131789] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:40:28.615 [2024-07-22 16:53:48.131839] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:40:28.615 [2024-07-22 16:53:48.131848] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:40:28.615 [2024-07-22 16:53:48.131862] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:40:28.615 [2024-07-22 16:53:48.131874] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:40:28.615 [2024-07-22 16:53:48.135004] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:40:28.615 [2024-07-22 16:53:48.135057] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2335120 0 00:40:28.615 [2024-07-22 16:53:48.142976] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:40:28.615 [2024-07-22 16:53:48.142997] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:40:28.615 [2024-07-22 16:53:48.143004] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:40:28.615 [2024-07-22 16:53:48.143011] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:40:28.615 [2024-07-22 16:53:48.143055] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.143066] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.143073] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2335120) 00:40:28.615 [2024-07-22 16:53:48.143097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:40:28.615 [2024-07-22 16:53:48.143123] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e1f0, cid 0, qid 0 00:40:28.615 [2024-07-22 16:53:48.151008] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.615 [2024-07-22 16:53:48.151025] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.615 [2024-07-22 16:53:48.151033] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.151040] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e1f0) on tqpair=0x2335120 00:40:28.615 [2024-07-22 16:53:48.151059] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:40:28.615 [2024-07-22 16:53:48.151070] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:40:28.615 [2024-07-22 16:53:48.151079] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:40:28.615 [2024-07-22 16:53:48.151106] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.151115] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.615 [2024-07-22 
16:53:48.151122] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2335120) 00:40:28.615 [2024-07-22 16:53:48.151133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.615 [2024-07-22 16:53:48.151157] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e1f0, cid 0, qid 0 00:40:28.615 [2024-07-22 16:53:48.151401] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.615 [2024-07-22 16:53:48.151417] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.615 [2024-07-22 16:53:48.151424] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.151434] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e1f0) on tqpair=0x2335120 00:40:28.615 [2024-07-22 16:53:48.151447] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:40:28.615 [2024-07-22 16:53:48.151462] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:40:28.615 [2024-07-22 16:53:48.151474] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.151481] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.151488] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2335120) 00:40:28.615 [2024-07-22 16:53:48.151498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.615 [2024-07-22 16:53:48.151520] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e1f0, cid 0, qid 0 00:40:28.615 [2024-07-22 16:53:48.151694] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.615 [2024-07-22 16:53:48.151709] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.615 [2024-07-22 16:53:48.151715] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.151722] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e1f0) on tqpair=0x2335120 00:40:28.615 [2024-07-22 16:53:48.151731] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:40:28.615 [2024-07-22 16:53:48.151745] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:40:28.615 [2024-07-22 16:53:48.151757] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.151764] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.151771] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2335120) 00:40:28.615 [2024-07-22 16:53:48.151781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.615 [2024-07-22 16:53:48.151801] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e1f0, cid 0, qid 0 00:40:28.615 [2024-07-22 16:53:48.151996] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.615 [2024-07-22 16:53:48.152013] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
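
The "read vs", "read cap", and "check en" states above are the standard NVMe controller-initialization ladder; over fabrics each register access travels as one of the FABRIC PROPERTY GET/SET capsules in the trace. A sketch of the enable handshake against the register layouts in spdk/nvme_spec.h — read_prop()/write_prop() are hypothetical stand-ins for the transport's property commands, and the real code bounds both polls with the CAP.TO-derived 15000 ms timeout seen in the log rather than spinning forever:

#include <stddef.h>
#include <stdint.h>
#include "spdk/nvme_spec.h"

typedef uint32_t (*prop_read_fn)(uint32_t offset);
typedef void (*prop_write_fn)(uint32_t offset, uint32_t value);

static void enable_controller(prop_read_fn read_prop, prop_write_fn write_prop)
{
    union spdk_nvme_cc_register cc;
    union spdk_nvme_csts_register csts;

    /* "check en": if the controller is already enabled, disable it and
     * wait for CSTS.RDY = 0, exactly as the trace does. */
    cc.raw = read_prop(offsetof(struct spdk_nvme_registers, cc));
    if (cc.bits.en) {
        cc.bits.en = 0;
        write_prop(offsetof(struct spdk_nvme_registers, cc), cc.raw);
        do {
            csts.raw = read_prop(offsetof(struct spdk_nvme_registers, csts));
        } while (csts.bits.rdy != 0);
    }

    /* "enable controller by writing CC.EN = 1", then poll for RDY = 1. */
    cc.bits.en = 1;
    write_prop(offsetof(struct spdk_nvme_registers, cc), cc.raw);
    do {
        csts.raw = read_prop(offsetof(struct spdk_nvme_registers, csts));
    } while (csts.bits.rdy != 1);
    /* CC.EN = 1 && CSTS.RDY = 1 -> "controller is ready" in the log. */
}
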
00:40:28.615 [2024-07-22 16:53:48.152020] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.152026] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e1f0) on tqpair=0x2335120 00:40:28.615 [2024-07-22 16:53:48.152036] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:40:28.615 [2024-07-22 16:53:48.152054] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.152063] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.152070] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2335120) 00:40:28.615 [2024-07-22 16:53:48.152080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.615 [2024-07-22 16:53:48.152102] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e1f0, cid 0, qid 0 00:40:28.615 [2024-07-22 16:53:48.152314] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.615 [2024-07-22 16:53:48.152326] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.615 [2024-07-22 16:53:48.152333] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.152339] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e1f0) on tqpair=0x2335120 00:40:28.615 [2024-07-22 16:53:48.152348] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:40:28.615 [2024-07-22 16:53:48.152360] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:40:28.615 [2024-07-22 16:53:48.152374] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:40:28.615 [2024-07-22 16:53:48.152486] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:40:28.615 [2024-07-22 16:53:48.152493] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:40:28.615 [2024-07-22 16:53:48.152505] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.152512] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.152518] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2335120) 00:40:28.615 [2024-07-22 16:53:48.152528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.615 [2024-07-22 16:53:48.152549] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e1f0, cid 0, qid 0 00:40:28.615 [2024-07-22 16:53:48.152753] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.615 [2024-07-22 16:53:48.152765] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.615 [2024-07-22 16:53:48.152772] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.152778] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e1f0) on 
tqpair=0x2335120 00:40:28.615 [2024-07-22 16:53:48.152787] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:40:28.615 [2024-07-22 16:53:48.152803] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.152811] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.152818] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2335120) 00:40:28.615 [2024-07-22 16:53:48.152828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.615 [2024-07-22 16:53:48.152848] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e1f0, cid 0, qid 0 00:40:28.615 [2024-07-22 16:53:48.152989] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.615 [2024-07-22 16:53:48.153006] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.615 [2024-07-22 16:53:48.153012] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.153019] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e1f0) on tqpair=0x2335120 00:40:28.615 [2024-07-22 16:53:48.153028] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:40:28.615 [2024-07-22 16:53:48.153037] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:40:28.615 [2024-07-22 16:53:48.153051] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:40:28.615 [2024-07-22 16:53:48.153064] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:40:28.615 [2024-07-22 16:53:48.153079] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.153088] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2335120) 00:40:28.615 [2024-07-22 16:53:48.153099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.615 [2024-07-22 16:53:48.153127] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e1f0, cid 0, qid 0 00:40:28.615 [2024-07-22 16:53:48.153404] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.615 [2024-07-22 16:53:48.153420] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.615 [2024-07-22 16:53:48.153427] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.153434] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2335120): datao=0, datal=4096, cccid=0 00:40:28.615 [2024-07-22 16:53:48.153441] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238e1f0) on tqpair(0x2335120): expected_datao=0, payload_size=4096 00:40:28.615 [2024-07-22 16:53:48.153449] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.153459] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.153466] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.153477] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.615 [2024-07-22 16:53:48.153487] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.615 [2024-07-22 16:53:48.153493] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.153500] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e1f0) on tqpair=0x2335120 00:40:28.615 [2024-07-22 16:53:48.153515] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:40:28.615 [2024-07-22 16:53:48.153524] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:40:28.615 [2024-07-22 16:53:48.153532] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:40:28.615 [2024-07-22 16:53:48.153538] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:40:28.615 [2024-07-22 16:53:48.153546] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:40:28.615 [2024-07-22 16:53:48.153553] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:40:28.615 [2024-07-22 16:53:48.153567] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:40:28.615 [2024-07-22 16:53:48.153579] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.153587] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.615 [2024-07-22 16:53:48.153593] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.153603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:28.616 [2024-07-22 16:53:48.153624] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e1f0, cid 0, qid 0 00:40:28.616 [2024-07-22 16:53:48.153826] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.616 [2024-07-22 16:53:48.153841] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.616 [2024-07-22 16:53:48.153847] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.153854] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e1f0) on tqpair=0x2335120 00:40:28.616 [2024-07-22 16:53:48.153865] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.153873] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.153879] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.153888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:28.616 [2024-07-22 16:53:48.153898] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.153904] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.153911] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.153919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:28.616 [2024-07-22 16:53:48.153932] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.153939] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.153960] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.153979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:28.616 [2024-07-22 16:53:48.153990] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.153997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.154004] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.154013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:28.616 [2024-07-22 16:53:48.154022] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.154042] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.154055] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.154062] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.154073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.616 [2024-07-22 16:53:48.154097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e1f0, cid 0, qid 0 00:40:28.616 [2024-07-22 16:53:48.154108] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e350, cid 1, qid 0 00:40:28.616 [2024-07-22 16:53:48.154117] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e4b0, cid 2, qid 0 00:40:28.616 [2024-07-22 16:53:48.154125] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e610, cid 3, qid 0 00:40:28.616 [2024-07-22 16:53:48.154133] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e770, cid 4, qid 0 00:40:28.616 [2024-07-22 16:53:48.154401] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.616 [2024-07-22 16:53:48.154413] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.616 [2024-07-22 16:53:48.154420] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.154426] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e770) on tqpair=0x2335120 00:40:28.616 [2024-07-22 16:53:48.154435] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:40:28.616 [2024-07-22 16:53:48.154451] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.154465] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.154477] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.154487] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.154494] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.154501] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.154511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:28.616 [2024-07-22 16:53:48.154531] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e770, cid 4, qid 0 00:40:28.616 [2024-07-22 16:53:48.154732] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.616 [2024-07-22 16:53:48.154750] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.616 [2024-07-22 16:53:48.154757] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.154764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e770) on tqpair=0x2335120 00:40:28.616 [2024-07-22 16:53:48.154829] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.154848] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.154863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.154870] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.154881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.616 [2024-07-22 16:53:48.154902] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e770, cid 4, qid 0 00:40:28.616 [2024-07-22 16:53:48.158978] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.616 [2024-07-22 16:53:48.158995] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.616 [2024-07-22 16:53:48.159002] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.159008] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2335120): datao=0, datal=4096, cccid=4 00:40:28.616 [2024-07-22 16:53:48.159016] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238e770) on tqpair(0x2335120): expected_datao=0, payload_size=4096 00:40:28.616 [2024-07-22 16:53:48.159024] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.159034] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.159041] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.198974] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.616 [2024-07-22 16:53:48.198993] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.616 [2024-07-22 16:53:48.199001] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.199009] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e770) on tqpair=0x2335120 00:40:28.616 [2024-07-22 16:53:48.199029] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:40:28.616 [2024-07-22 16:53:48.199051] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.199071] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.199085] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.199093] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.199104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.616 [2024-07-22 16:53:48.199128] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e770, cid 4, qid 0 00:40:28.616 [2024-07-22 16:53:48.199347] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.616 [2024-07-22 16:53:48.199362] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.616 [2024-07-22 16:53:48.199369] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.199375] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2335120): datao=0, datal=4096, cccid=4 00:40:28.616 [2024-07-22 16:53:48.199383] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238e770) on tqpair(0x2335120): expected_datao=0, payload_size=4096 00:40:28.616 [2024-07-22 16:53:48.199394] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.199449] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.199458] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.199612] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.616 [2024-07-22 16:53:48.199623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.616 [2024-07-22 16:53:48.199630] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.199636] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e770) on tqpair=0x2335120 00:40:28.616 [2024-07-22 16:53:48.199661] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.199679] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.199692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.199700] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.199710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.616 [2024-07-22 16:53:48.199732] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e770, cid 4, qid 0 00:40:28.616 [2024-07-22 16:53:48.199915] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.616 [2024-07-22 16:53:48.199930] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.616 [2024-07-22 16:53:48.199936] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.199943] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2335120): datao=0, datal=4096, cccid=4 00:40:28.616 [2024-07-22 16:53:48.199971] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238e770) on tqpair(0x2335120): expected_datao=0, payload_size=4096 00:40:28.616 [2024-07-22 16:53:48.199979] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.199998] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.200007] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.200127] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.616 [2024-07-22 16:53:48.200140] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.616 [2024-07-22 16:53:48.200147] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.200154] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e770) on tqpair=0x2335120 00:40:28.616 [2024-07-22 16:53:48.200170] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.200185] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.200202] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.200215] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.200223] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.200232] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:40:28.616 [2024-07-22 16:53:48.200249] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:40:28.616 [2024-07-22 16:53:48.200271] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:40:28.616 [2024-07-22 16:53:48.200298] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.200307] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.200318] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.616 [2024-07-22 16:53:48.200344] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.200351] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.200358] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.200367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:40:28.616 [2024-07-22 16:53:48.200392] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e770, cid 4, qid 0 00:40:28.616 [2024-07-22 16:53:48.200403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e8d0, cid 5, qid 0 00:40:28.616 [2024-07-22 16:53:48.200581] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.616 [2024-07-22 16:53:48.200596] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.616 [2024-07-22 16:53:48.200603] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.200609] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e770) on tqpair=0x2335120 00:40:28.616 [2024-07-22 16:53:48.200620] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.616 [2024-07-22 16:53:48.200629] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.616 [2024-07-22 16:53:48.200635] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.200642] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e8d0) on tqpair=0x2335120 00:40:28.616 [2024-07-22 16:53:48.200658] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.200667] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2335120) 00:40:28.616 [2024-07-22 16:53:48.200677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.616 [2024-07-22 16:53:48.200698] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e8d0, cid 5, qid 0 00:40:28.616 [2024-07-22 16:53:48.200904] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.616 [2024-07-22 16:53:48.200918] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.616 [2024-07-22 16:53:48.200925] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.616 [2024-07-22 16:53:48.200931] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e8d0) on tqpair=0x2335120 00:40:28.616 [2024-07-22 16:53:48.200948] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.200957] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2335120) 00:40:28.617 [2024-07-22 16:53:48.200992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.617 [2024-07-22 16:53:48.201016] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e8d0, cid 5, qid 0 00:40:28.617 [2024-07-22 16:53:48.201224] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.617 [2024-07-22 16:53:48.201239] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.617 [2024-07-22 16:53:48.201246] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.201252] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e8d0) on tqpair=0x2335120 00:40:28.617 [2024-07-22 16:53:48.201270] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.201293] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2335120) 00:40:28.617 [2024-07-22 16:53:48.201303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.617 [2024-07-22 16:53:48.201327] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e8d0, cid 5, qid 0 00:40:28.617 [2024-07-22 16:53:48.201487] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.617 [2024-07-22 16:53:48.201499] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.617 [2024-07-22 16:53:48.201506] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.201512] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e8d0) on tqpair=0x2335120 00:40:28.617 [2024-07-22 16:53:48.201531] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.201541] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2335120) 00:40:28.617 [2024-07-22 16:53:48.201551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.617 [2024-07-22 16:53:48.201562] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.201569] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2335120) 00:40:28.617 [2024-07-22 16:53:48.201578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.617 [2024-07-22 16:53:48.201590] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.201596] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2335120) 00:40:28.617 [2024-07-22 16:53:48.201605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.617 [2024-07-22 16:53:48.201616] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.201623] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2335120) 00:40:28.617 [2024-07-22 16:53:48.201632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.617 [2024-07-22 16:53:48.201653] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e8d0, cid 5, qid 0 00:40:28.617 [2024-07-22 16:53:48.201664] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e770, cid 4, qid 0 00:40:28.617 [2024-07-22 16:53:48.201672] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x238ea30, cid 6, qid 0 00:40:28.617 [2024-07-22 16:53:48.201679] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238eb90, cid 7, qid 0 00:40:28.617 [2024-07-22 16:53:48.201919] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.617 [2024-07-22 16:53:48.201934] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.617 [2024-07-22 16:53:48.201941] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.201947] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2335120): datao=0, datal=8192, cccid=5 00:40:28.617 [2024-07-22 16:53:48.201954] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238e8d0) on tqpair(0x2335120): expected_datao=0, payload_size=8192 00:40:28.617 [2024-07-22 16:53:48.201962] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202008] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202018] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202027] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.617 [2024-07-22 16:53:48.202036] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.617 [2024-07-22 16:53:48.202043] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202049] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2335120): datao=0, datal=512, cccid=4 00:40:28.617 [2024-07-22 16:53:48.202060] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238e770) on tqpair(0x2335120): expected_datao=0, payload_size=512 00:40:28.617 [2024-07-22 16:53:48.202068] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202077] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202085] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202093] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.617 [2024-07-22 16:53:48.202102] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.617 [2024-07-22 16:53:48.202108] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202115] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2335120): datao=0, datal=512, cccid=6 00:40:28.617 [2024-07-22 16:53:48.202122] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238ea30) on tqpair(0x2335120): expected_datao=0, payload_size=512 00:40:28.617 [2024-07-22 16:53:48.202129] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202138] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202145] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:28.617 [2024-07-22 16:53:48.202162] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:28.617 [2024-07-22 16:53:48.202169] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202175] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2335120): datao=0, datal=4096, cccid=7 
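
Each GET LOG PAGE above comes back as one or more C2HData PDUs whose datao/datal fields (e.g. "datao=0, datal=8192, cccid=5") place the chunk inside the host's buffer. A sketch of that reassembly bookkeeping — the header struct is a simplified rendering of the NVMe/TCP C2HData PDU from the transport spec, not SPDK's internal definition, and digests/padding are ignored:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct c2h_data_hdr {       /* follows the 8-byte common PDU header */
    uint16_t cccid;         /* command capsule CID this data answers */
    uint16_t rsvd;
    uint32_t datao;         /* byte offset of this chunk in the payload */
    uint32_t datal;         /* bytes of data carried by this PDU */
    uint32_t rsvd2;
};

struct host_req {
    uint8_t  *payload;      /* e.g. the 4096-byte identify buffer */
    uint32_t  payload_size; /* "payload_size=4096" in the trace */
    uint32_t  received;     /* running total across C2HData PDUs */
};

/* Copy one PDU's data into the request buffer; returns true when the
 * transfer is complete. Real code also validates datao ordering and
 * fails the request on a mismatch instead of just rejecting the PDU. */
static bool c2h_data_consume(struct host_req *req,
                             const struct c2h_data_hdr *hdr,
                             const uint8_t *data)
{
    if ((uint64_t)hdr->datao + hdr->datal > req->payload_size) {
        return false;       /* expected_datao/payload_size violation */
    }
    memcpy(req->payload + hdr->datao, data, hdr->datal);
    req->received += hdr->datal;
    return req->received == req->payload_size;
}
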
00:40:28.617 [2024-07-22 16:53:48.202182] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238eb90) on tqpair(0x2335120): expected_datao=0, payload_size=4096 00:40:28.617 [2024-07-22 16:53:48.202189] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202199] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202206] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202218] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.617 [2024-07-22 16:53:48.202227] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.617 [2024-07-22 16:53:48.202233] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202240] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e8d0) on tqpair=0x2335120 00:40:28.617 [2024-07-22 16:53:48.202259] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.617 [2024-07-22 16:53:48.202285] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.617 [2024-07-22 16:53:48.202292] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202298] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e770) on tqpair=0x2335120 00:40:28.617 [2024-07-22 16:53:48.202313] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.617 [2024-07-22 16:53:48.202323] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.617 [2024-07-22 16:53:48.202329] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202336] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238ea30) on tqpair=0x2335120 00:40:28.617 [2024-07-22 16:53:48.202350] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.617 [2024-07-22 16:53:48.202360] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.617 [2024-07-22 16:53:48.202367] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.617 [2024-07-22 16:53:48.202373] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238eb90) on tqpair=0x2335120 00:40:28.617 ===================================================== 00:40:28.617 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:28.617 ===================================================== 00:40:28.617 Controller Capabilities/Features 00:40:28.617 ================================ 00:40:28.617 Vendor ID: 8086 00:40:28.617 Subsystem Vendor ID: 8086 00:40:28.617 Serial Number: SPDK00000000000001 00:40:28.617 Model Number: SPDK bdev Controller 00:40:28.617 Firmware Version: 24.05.1 00:40:28.617 Recommended Arb Burst: 6 00:40:28.617 IEEE OUI Identifier: e4 d2 5c 00:40:28.617 Multi-path I/O 00:40:28.617 May have multiple subsystem ports: Yes 00:40:28.617 May have multiple controllers: Yes 00:40:28.617 Associated with SR-IOV VF: No 00:40:28.617 Max Data Transfer Size: 131072 00:40:28.618 Max Number of Namespaces: 32 00:40:28.618 Max Number of I/O Queues: 127 00:40:28.618 NVMe Specification Version (VS): 1.3 00:40:28.618 NVMe Specification Version (Identify): 1.3 00:40:28.618 Maximum Queue Entries: 128 00:40:28.618 Contiguous Queues Required: Yes 00:40:28.618 Arbitration Mechanisms Supported 00:40:28.618 Weighted Round Robin: Not Supported 00:40:28.618 Vendor 
Specific: Not Supported 00:40:28.618 Reset Timeout: 15000 ms 00:40:28.618 Doorbell Stride: 4 bytes 00:40:28.618 NVM Subsystem Reset: Not Supported 00:40:28.618 Command Sets Supported 00:40:28.618 NVM Command Set: Supported 00:40:28.618 Boot Partition: Not Supported 00:40:28.618 Memory Page Size Minimum: 4096 bytes 00:40:28.618 Memory Page Size Maximum: 4096 bytes 00:40:28.618 Persistent Memory Region: Not Supported 00:40:28.618 Optional Asynchronous Events Supported 00:40:28.618 Namespace Attribute Notices: Supported 00:40:28.618 Firmware Activation Notices: Not Supported 00:40:28.618 ANA Change Notices: Not Supported 00:40:28.618 PLE Aggregate Log Change Notices: Not Supported 00:40:28.618 LBA Status Info Alert Notices: Not Supported 00:40:28.618 EGE Aggregate Log Change Notices: Not Supported 00:40:28.618 Normal NVM Subsystem Shutdown event: Not Supported 00:40:28.618 Zone Descriptor Change Notices: Not Supported 00:40:28.618 Discovery Log Change Notices: Not Supported 00:40:28.618 Controller Attributes 00:40:28.618 128-bit Host Identifier: Supported 00:40:28.618 Non-Operational Permissive Mode: Not Supported 00:40:28.618 NVM Sets: Not Supported 00:40:28.618 Read Recovery Levels: Not Supported 00:40:28.618 Endurance Groups: Not Supported 00:40:28.618 Predictable Latency Mode: Not Supported 00:40:28.618 Traffic Based Keep ALive: Not Supported 00:40:28.618 Namespace Granularity: Not Supported 00:40:28.618 SQ Associations: Not Supported 00:40:28.618 UUID List: Not Supported 00:40:28.618 Multi-Domain Subsystem: Not Supported 00:40:28.618 Fixed Capacity Management: Not Supported 00:40:28.618 Variable Capacity Management: Not Supported 00:40:28.618 Delete Endurance Group: Not Supported 00:40:28.618 Delete NVM Set: Not Supported 00:40:28.618 Extended LBA Formats Supported: Not Supported 00:40:28.618 Flexible Data Placement Supported: Not Supported 00:40:28.618 00:40:28.618 Controller Memory Buffer Support 00:40:28.618 ================================ 00:40:28.618 Supported: No 00:40:28.618 00:40:28.618 Persistent Memory Region Support 00:40:28.618 ================================ 00:40:28.618 Supported: No 00:40:28.618 00:40:28.618 Admin Command Set Attributes 00:40:28.618 ============================ 00:40:28.618 Security Send/Receive: Not Supported 00:40:28.618 Format NVM: Not Supported 00:40:28.618 Firmware Activate/Download: Not Supported 00:40:28.618 Namespace Management: Not Supported 00:40:28.618 Device Self-Test: Not Supported 00:40:28.618 Directives: Not Supported 00:40:28.618 NVMe-MI: Not Supported 00:40:28.618 Virtualization Management: Not Supported 00:40:28.618 Doorbell Buffer Config: Not Supported 00:40:28.618 Get LBA Status Capability: Not Supported 00:40:28.618 Command & Feature Lockdown Capability: Not Supported 00:40:28.618 Abort Command Limit: 4 00:40:28.618 Async Event Request Limit: 4 00:40:28.618 Number of Firmware Slots: N/A 00:40:28.618 Firmware Slot 1 Read-Only: N/A 00:40:28.618 Firmware Activation Without Reset: N/A 00:40:28.618 Multiple Update Detection Support: N/A 00:40:28.618 Firmware Update Granularity: No Information Provided 00:40:28.618 Per-Namespace SMART Log: No 00:40:28.618 Asymmetric Namespace Access Log Page: Not Supported 00:40:28.618 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:40:28.618 Command Effects Log Page: Supported 00:40:28.618 Get Log Page Extended Data: Supported 00:40:28.618 Telemetry Log Pages: Not Supported 00:40:28.618 Persistent Event Log Pages: Not Supported 00:40:28.618 Supported Log Pages Log Page: May Support 00:40:28.618 Commands 
Supported & Effects Log Page: Not Supported 00:40:28.618 Feature Identifiers & Effects Log Page:May Support 00:40:28.618 NVMe-MI Commands & Effects Log Page: May Support 00:40:28.618 Data Area 4 for Telemetry Log: Not Supported 00:40:28.618 Error Log Page Entries Supported: 128 00:40:28.618 Keep Alive: Supported 00:40:28.618 Keep Alive Granularity: 10000 ms 00:40:28.618 00:40:28.618 NVM Command Set Attributes 00:40:28.618 ========================== 00:40:28.618 Submission Queue Entry Size 00:40:28.618 Max: 64 00:40:28.618 Min: 64 00:40:28.618 Completion Queue Entry Size 00:40:28.618 Max: 16 00:40:28.618 Min: 16 00:40:28.618 Number of Namespaces: 32 00:40:28.618 Compare Command: Supported 00:40:28.618 Write Uncorrectable Command: Not Supported 00:40:28.618 Dataset Management Command: Supported 00:40:28.618 Write Zeroes Command: Supported 00:40:28.618 Set Features Save Field: Not Supported 00:40:28.618 Reservations: Supported 00:40:28.618 Timestamp: Not Supported 00:40:28.618 Copy: Supported 00:40:28.618 Volatile Write Cache: Present 00:40:28.618 Atomic Write Unit (Normal): 1 00:40:28.618 Atomic Write Unit (PFail): 1 00:40:28.618 Atomic Compare & Write Unit: 1 00:40:28.618 Fused Compare & Write: Supported 00:40:28.618 Scatter-Gather List 00:40:28.618 SGL Command Set: Supported 00:40:28.618 SGL Keyed: Supported 00:40:28.618 SGL Bit Bucket Descriptor: Not Supported 00:40:28.618 SGL Metadata Pointer: Not Supported 00:40:28.618 Oversized SGL: Not Supported 00:40:28.618 SGL Metadata Address: Not Supported 00:40:28.618 SGL Offset: Supported 00:40:28.618 Transport SGL Data Block: Not Supported 00:40:28.618 Replay Protected Memory Block: Not Supported 00:40:28.618 00:40:28.618 Firmware Slot Information 00:40:28.618 ========================= 00:40:28.618 Active slot: 1 00:40:28.618 Slot 1 Firmware Revision: 24.05.1 00:40:28.618 00:40:28.618 00:40:28.618 Commands Supported and Effects 00:40:28.618 ============================== 00:40:28.618 Admin Commands 00:40:28.618 -------------- 00:40:28.618 Get Log Page (02h): Supported 00:40:28.618 Identify (06h): Supported 00:40:28.618 Abort (08h): Supported 00:40:28.618 Set Features (09h): Supported 00:40:28.618 Get Features (0Ah): Supported 00:40:28.618 Asynchronous Event Request (0Ch): Supported 00:40:28.618 Keep Alive (18h): Supported 00:40:28.618 I/O Commands 00:40:28.618 ------------ 00:40:28.618 Flush (00h): Supported LBA-Change 00:40:28.618 Write (01h): Supported LBA-Change 00:40:28.618 Read (02h): Supported 00:40:28.618 Compare (05h): Supported 00:40:28.618 Write Zeroes (08h): Supported LBA-Change 00:40:28.618 Dataset Management (09h): Supported LBA-Change 00:40:28.618 Copy (19h): Supported LBA-Change 00:40:28.618 Unknown (79h): Supported LBA-Change 00:40:28.618 Unknown (7Ah): Supported 00:40:28.618 00:40:28.618 Error Log 00:40:28.618 ========= 00:40:28.618 00:40:28.618 Arbitration 00:40:28.618 =========== 00:40:28.618 Arbitration Burst: 1 00:40:28.618 00:40:28.618 Power Management 00:40:28.618 ================ 00:40:28.618 Number of Power States: 1 00:40:28.618 Current Power State: Power State #0 00:40:28.618 Power State #0: 00:40:28.618 Max Power: 0.00 W 00:40:28.618 Non-Operational State: Operational 00:40:28.618 Entry Latency: Not Reported 00:40:28.618 Exit Latency: Not Reported 00:40:28.618 Relative Read Throughput: 0 00:40:28.618 Relative Read Latency: 0 00:40:28.618 Relative Write Throughput: 0 00:40:28.618 Relative Write Latency: 0 00:40:28.618 Idle Power: Not Reported 00:40:28.618 Active Power: Not Reported 00:40:28.618 Non-Operational 
Permissive Mode: Not Supported 00:40:28.618 00:40:28.618 Health Information 00:40:28.618 ================== 00:40:28.618 Critical Warnings: 00:40:28.618 Available Spare Space: OK 00:40:28.618 Temperature: OK 00:40:28.618 Device Reliability: OK 00:40:28.618 Read Only: No 00:40:28.618 Volatile Memory Backup: OK 00:40:28.618 Current Temperature: 0 Kelvin (-273 Celsius) 00:40:28.618 Temperature Threshold: 0 Kelvin (-273 Celsius) [2024-07-22 16:53:48.202485] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.618 [2024-07-22 16:53:48.202497] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2335120) 00:40:28.618 [2024-07-22 16:53:48.202510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.618 [2024-07-22 16:53:48.202533] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238eb90, cid 7, qid 0 00:40:28.618 [2024-07-22 16:53:48.202735] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.618 [2024-07-22 16:53:48.202750] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.618 [2024-07-22 16:53:48.202756] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.618 [2024-07-22 16:53:48.202763] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238eb90) on tqpair=0x2335120 00:40:28.618 [2024-07-22 16:53:48.202801] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:40:28.618 [2024-07-22 16:53:48.202824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.618 [2024-07-22 16:53:48.202835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.618 [2024-07-22 16:53:48.202844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.618 [2024-07-22 16:53:48.202853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.618 [2024-07-22 16:53:48.202865] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.618 [2024-07-22 16:53:48.202873] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.618 [2024-07-22 16:53:48.202879] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2335120) 00:40:28.618 [2024-07-22 16:53:48.202889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.618 [2024-07-22 16:53:48.202911] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e610, cid 3, qid 0 00:40:28.618 [2024-07-22 16:53:48.206981] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.618 [2024-07-22 16:53:48.207008] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.618 [2024-07-22 16:53:48.207016] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.618 [2024-07-22 16:53:48.207023] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e610) on tqpair=0x2335120 00:40:28.618 [2024-07-22 16:53:48.207036] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.618 [2024-07-22 16:53:48.207044] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.618 [2024-07-22 16:53:48.207051] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2335120) 00:40:28.618 [2024-07-22 16:53:48.207061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.618 [2024-07-22 16:53:48.207090] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e610, cid 3, qid 0 00:40:28.618 [2024-07-22 16:53:48.207315] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.618 [2024-07-22 16:53:48.207330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.618 [2024-07-22 16:53:48.207336] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.618 [2024-07-22 16:53:48.207343] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e610) on tqpair=0x2335120 00:40:28.618 [2024-07-22 16:53:48.207352] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:40:28.618 [2024-07-22 16:53:48.207360] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:40:28.618 [2024-07-22 16:53:48.207376] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.618 [2024-07-22 16:53:48.207385] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.618 [2024-07-22 16:53:48.207391] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2335120) 00:40:28.618 [2024-07-22 16:53:48.207402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.619 [2024-07-22 16:53:48.207426] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e610, cid 3, qid 0 00:40:28.619 [2024-07-22 16:53:48.207637] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.619 [2024-07-22 16:53:48.207652] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.619 [2024-07-22 16:53:48.207659] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.619 [2024-07-22 16:53:48.207665] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e610) on tqpair=0x2335120 00:40:28.619 [2024-07-22 16:53:48.207683] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:28.619 [2024-07-22 16:53:48.207692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:28.619 [2024-07-22 16:53:48.207698] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2335120) 00:40:28.619 [2024-07-22 16:53:48.207708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.619 [2024-07-22 16:53:48.207728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e610, cid 3, qid 0 00:40:28.619 [... 16:53:48.207929 through 16:53:48.215077: the same shutdown poll cycle (pdu type = 5 -> capsule resp -> complete tcp_req(0x238e610) -> resend FABRIC PROPERTY GET qid:0 cid:3) repeats 13 more times with identical *DEBUG*/*NOTICE* lines ...] 00:40:28.619 [2024-07-22 16:53:48.215261] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:28.619 [2024-07-22 16:53:48.215275] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:28.619 [2024-07-22 16:53:48.215297] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:28.619 [2024-07-22 16:53:48.215304] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x238e610) on tqpair=0x2335120 00:40:28.619 [2024-07-22 16:53:48.215318] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:40:28.619 Available Spare: 0% 00:40:28.619 Available Spare Threshold: 0% 00:40:28.620 Life Percentage Used: 0% 00:40:28.620 Data Units Read: 0 00:40:28.620 Data Units Written: 0 00:40:28.620 Host Read Commands: 0 00:40:28.620 Host Write Commands: 0 00:40:28.620 Controller Busy Time: 0 minutes 00:40:28.620 Power Cycles: 0 00:40:28.620 Power On Hours: 0 hours 00:40:28.620 Unsafe Shutdowns: 0 00:40:28.620 Unrecoverable Media Errors: 0 00:40:28.620 Lifetime Error Log Entries: 0 00:40:28.620 Warning Temperature Time: 0 minutes 00:40:28.620 Critical Temperature Time: 0 minutes 00:40:28.620 00:40:28.620 Number of Queues 00:40:28.620 ================ 00:40:28.620 Number of I/O Submission Queues: 127 00:40:28.620 Number of I/O Completion Queues: 127 00:40:28.620 00:40:28.620 Active Namespaces 00:40:28.620 ================= 00:40:28.620 Namespace ID:1 00:40:28.620 Error Recovery Timeout: Unlimited 00:40:28.620 Command Set Identifier: NVM (00h) 00:40:28.620 Deallocate: Supported 00:40:28.620 Deallocated/Unwritten Error: Not Supported 00:40:28.620 Deallocated Read Value: Unknown 00:40:28.620 Deallocate in Write Zeroes: Not Supported 00:40:28.620 Deallocated Guard Field: 0xFFFF 00:40:28.620 Flush: Supported 00:40:28.620 Reservation: Supported 00:40:28.620 Namespace Sharing Capabilities: Multiple Controllers 00:40:28.620 Size (in LBAs): 131072 (0GiB) 00:40:28.620 Capacity (in LBAs): 131072 (0GiB) 00:40:28.620 Utilization (in LBAs): 131072 (0GiB) 00:40:28.620 NGUID: ABCDEF0123456789ABCDEF0123456789 00:40:28.620 EUI64: ABCDEF0123456789 00:40:28.620 UUID: a839c61b-f9b8-407d-ad2e-126e97872fec 00:40:28.620 Thin Provisioning: Not Supported 00:40:28.620 Per-NS Atomic Units: Yes 00:40:28.620 Atomic Boundary Size (Normal): 0 00:40:28.620 Atomic Boundary Size (PFail): 0 00:40:28.620 Atomic Boundary Offset: 0 00:40:28.620 Maximum Single Source Range Length: 65535 00:40:28.620 Maximum Copy Length: 65535 00:40:28.620 Maximum Source Range Count: 1 00:40:28.620 NGUID/EUI64 Never Reused: No 00:40:28.620 Namespace Write Protected: No 00:40:28.620 Number of LBA Formats: 1 00:40:28.620 Current LBA Format: LBA Format #00 00:40:28.620 LBA Format #00: Data Size: 512 Metadata Size: 0 00:40:28.620 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:28.620 16:53:48 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:28.620 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:28.620 rmmod nvme_tcp 00:40:28.879 rmmod nvme_fabrics 00:40:28.879 rmmod nvme_keyring 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2897436 ']' 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2897436 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 2897436 ']' 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 2897436 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2897436 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2897436' 00:40:28.879 killing process with pid 2897436 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 2897436 00:40:28.879 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 2897436 00:40:29.138 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:29.138 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:29.138 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:29.138 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:29.138 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:29.138 16:53:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.138 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:29.138 16:53:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.052 16:53:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:31.052 00:40:31.052 real 0m5.808s 00:40:31.052 user 0m4.618s 00:40:31.052 sys 0m2.141s 00:40:31.052 16:53:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:31.052 16:53:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:40:31.052 ************************************ 00:40:31.052 END TEST nvmf_identify 00:40:31.052 ************************************ 00:40:31.052 16:53:50 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:40:31.052 16:53:50 nvmf_tcp -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:40:31.052 16:53:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:31.052 16:53:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:31.052 ************************************ 00:40:31.052 START TEST nvmf_perf 00:40:31.052 ************************************ 00:40:31.052 16:53:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:40:31.311 * Looking for test storage... 00:40:31.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:40:31.311 16:53:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:33.879 16:53:53 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:40:33.879 Found 0000:82:00.0 (0x8086 - 0x159b) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:40:33.879 Found 0000:82:00.1 (0x8086 - 0x159b) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:40:33.879 Found net devices under 0000:82:00.0: cvl_0_0 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:40:33.879 Found net devices under 0000:82:00.1: cvl_0_1 
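(Annotation: the device discovery above reduces to one sysfs glob per PCI function. A minimal standalone sketch of that lookup, reusing the pci_net_devs glob and the two e810 functions reported in this run; the loop framing is an illustrative reconstruction, not the harness's exact code.)

  #!/usr/bin/env bash
  # List the kernel net devices behind each candidate PCI function,
  # mirroring pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) above.
  for pci in 0000:82:00.0 0000:82:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $dev ]] || continue   # this function exposes no net device
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done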
00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:33.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:33.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:40:33.879 00:40:33.879 --- 10.0.0.2 ping statistics --- 00:40:33.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.879 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:33.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:33.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:40:33.879 00:40:33.879 --- 10.0.0.1 ping statistics --- 00:40:33.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.879 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:33.879 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2899809 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2899809 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 2899809 ']' 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:33.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:33.880 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:40:33.880 [2024-07-22 16:53:53.478215] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:33.880 [2024-07-22 16:53:53.478296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:34.164 EAL: No free 2048 kB hugepages reported on node 1 00:40:34.164 [2024-07-22 16:53:53.557353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:34.164 [2024-07-22 16:53:53.650740] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:34.164 [2024-07-22 16:53:53.650795] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:34.164 [2024-07-22 16:53:53.650820] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:34.164 [2024-07-22 16:53:53.650835] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:34.164 [2024-07-22 16:53:53.650846] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:34.164 [2024-07-22 16:53:53.650948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:34.164 [2024-07-22 16:53:53.651038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:34.164 [2024-07-22 16:53:53.651041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:34.164 [2024-07-22 16:53:53.650994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:34.164 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:34.164 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:40:34.164 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:34.164 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:34.164 16:53:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:40:34.421 16:53:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:34.421 16:53:53 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:34.421 16:53:53 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:40:37.698 16:53:56 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:40:37.698 16:53:56 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:40:37.698 16:53:57 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:81:00.0 00:40:37.698 16:53:57 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:37.956 16:53:57 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:40:37.956 16:53:57 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:81:00.0 ']' 00:40:37.956 16:53:57 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:40:37.956 16:53:57 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:40:37.956 16:53:57 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:40:38.214 [2024-07-22 16:53:57.622704] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:38.214 16:53:57 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:38.471 16:53:57 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:40:38.471 16:53:57 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:38.729 16:53:58 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:40:38.729 16:53:58 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:38.729 16:53:58 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:38.986 [2024-07-22 16:53:58.598333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:38.986 16:53:58 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:39.244 16:53:58 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:81:00.0 ']' 00:40:39.244 16:53:58 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:40:39.244 16:53:58 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:40:39.244 16:53:58 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:40:40.616 Initializing NVMe Controllers 00:40:40.616 Attached to NVMe Controller at 0000:81:00.0 [8086:0a54] 00:40:40.616 Associating PCIE (0000:81:00.0) NSID 1 with lcore 0 00:40:40.616 Initialization complete. Launching workers. 00:40:40.616 ======================================================== 00:40:40.616 Latency(us) 00:40:40.616 Device Information : IOPS MiB/s Average min max 00:40:40.616 PCIE (0000:81:00.0) NSID 1 from core 0: 84428.52 329.80 378.25 32.74 4734.85 00:40:40.616 ======================================================== 00:40:40.616 Total : 84428.52 329.80 378.25 32.74 4734.85 00:40:40.616 00:40:40.616 16:54:00 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:40.616 EAL: No free 2048 kB hugepages reported on node 1 00:40:41.987 Initializing NVMe Controllers 00:40:41.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:41.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:41.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:40:41.987 Initialization complete. Launching workers. 
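(Annotation: the NVMe/TCP path exercised by the runs below was plumbed by nvmf_tcp_init in the records above. Condensed here into a sketch, with the interface names and addresses exactly as logged; folding the steps into one set -e script is illustrative.)

  #!/usr/bin/env bash
  # cvl_0_0 becomes the target NIC inside the cvl_0_0_ns_spdk namespace;
  # cvl_0_1 stays in the root namespace as the initiator NIC.
  set -e
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns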
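(Annotation: the subsystem those runs attach to was configured over JSON-RPC just above. Stripped of the run_test/xtrace wrappers, the sequence is roughly the sketch below; the relative rpc.py path is shorthand for the absolute workspace path in the log, everything else is as logged.)

  #!/usr/bin/env bash
  # RPC sequence perf.sh issued against the nvmf_tgt started earlier
  # (rpc.py talks to the default /var/tmp/spdk.sock socket).
  RPC="scripts/rpc.py"
  $RPC bdev_malloc_create 64 512                 # Malloc0: 64 MiB, 512 B blocks
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420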
00:40:41.987 ======================================================== 00:40:41.987 Latency(us) 00:40:41.987 Device Information : IOPS MiB/s Average min max 00:40:41.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 87.00 0.34 11891.88 160.70 45136.84 00:40:41.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19754.81 6963.83 54866.29 00:40:41.987 ======================================================== 00:40:41.987 Total : 138.00 0.54 14797.74 160.70 54866.29 00:40:41.987 00:40:41.987 16:54:01 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:41.987 EAL: No free 2048 kB hugepages reported on node 1 00:40:43.886 Initializing NVMe Controllers 00:40:43.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:43.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:43.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:40:43.886 Initialization complete. Launching workers. 00:40:43.886 ======================================================== 00:40:43.886 Latency(us) 00:40:43.886 Device Information : IOPS MiB/s Average min max 00:40:43.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8711.99 34.03 3687.61 523.81 7424.69 00:40:43.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3848.00 15.03 8353.89 6085.22 15909.98 00:40:43.886 ======================================================== 00:40:43.886 Total : 12559.99 49.06 5117.22 523.81 15909.98 00:40:43.886 00:40:43.886 16:54:03 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:40:43.886 16:54:03 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:40:43.886 16:54:03 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:43.886 EAL: No free 2048 kB hugepages reported on node 1 00:40:46.414 Initializing NVMe Controllers 00:40:46.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:46.414 Controller IO queue size 128, less than required. 00:40:46.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:46.414 Controller IO queue size 128, less than required. 00:40:46.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:46.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:46.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:40:46.414 Initialization complete. Launching workers. 
00:40:46.414 ======================================================== 00:40:46.414 Latency(us) 00:40:46.414 Device Information : IOPS MiB/s Average min max 00:40:46.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1232.95 308.24 106454.08 51517.84 176469.14 00:40:46.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 591.98 147.99 224445.98 85825.62 352228.18 00:40:46.414 ======================================================== 00:40:46.414 Total : 1824.93 456.23 144728.71 51517.84 352228.18 00:40:46.414 00:40:46.414 16:54:05 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:40:46.414 EAL: No free 2048 kB hugepages reported on node 1 00:40:46.414 No valid NVMe controllers or AIO or URING devices found 00:40:46.414 Initializing NVMe Controllers 00:40:46.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:46.414 Controller IO queue size 128, less than required. 00:40:46.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:46.414 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:40:46.414 Controller IO queue size 128, less than required. 00:40:46.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:46.414 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:40:46.414 WARNING: Some requested NVMe devices were skipped 00:40:46.414 16:54:05 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:40:46.414 EAL: No free 2048 kB hugepages reported on node 1 00:40:48.941 Initializing NVMe Controllers 00:40:48.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:48.941 Controller IO queue size 128, less than required. 00:40:48.941 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:48.941 Controller IO queue size 128, less than required. 00:40:48.941 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:48.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:48.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:40:48.941 Initialization complete. Launching workers. 
00:40:48.941 00:40:48.941 ==================== 00:40:48.941 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:40:48.941 TCP transport: 00:40:48.941 polls: 11355 00:40:48.941 idle_polls: 6209 00:40:48.941 sock_completions: 5146 00:40:48.941 nvme_completions: 4957 00:40:48.941 submitted_requests: 7400 00:40:48.941 queued_requests: 1 00:40:48.941 00:40:48.941 ==================== 00:40:48.941 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:40:48.941 TCP transport: 00:40:48.941 polls: 11679 00:40:48.941 idle_polls: 6099 00:40:48.941 sock_completions: 5580 00:40:48.941 nvme_completions: 5317 00:40:48.941 submitted_requests: 8000 00:40:48.941 queued_requests: 1 00:40:48.941 ======================================================== 00:40:48.941 Latency(us) 00:40:48.941 Device Information : IOPS MiB/s Average min max 00:40:48.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1236.47 309.12 105776.05 66231.15 184356.67 00:40:48.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1326.28 331.57 97220.70 48613.02 152219.92 00:40:48.941 ======================================================== 00:40:48.941 Total : 2562.75 640.69 101348.46 48613.02 184356.67 00:40:48.941 00:40:48.941 16:54:08 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:40:48.941 16:54:08 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:48.941 16:54:08 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:40:48.941 16:54:08 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:81:00.0 ']' 00:40:48.941 16:54:08 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=4166548f-ede2-4e15-bb0f-1b38594c8fff 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 4166548f-ede2-4e15-bb0f-1b38594c8fff 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=4166548f-ede2-4e15-bb0f-1b38594c8fff 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:40:55.493 { 00:40:55.493 "uuid": "4166548f-ede2-4e15-bb0f-1b38594c8fff", 00:40:55.493 "name": "lvs_0", 00:40:55.493 "base_bdev": "Nvme0n1", 00:40:55.493 "total_data_clusters": 476466, 00:40:55.493 "free_clusters": 476466, 00:40:55.493 "block_size": 512, 00:40:55.493 "cluster_size": 4194304 00:40:55.493 } 00:40:55.493 ]' 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="4166548f-ede2-4e15-bb0f-1b38594c8fff") .free_clusters' 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=476466 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="4166548f-ede2-4e15-bb0f-1b38594c8fff") .cluster_size' 00:40:55.493 16:54:14 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=1905864 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 1905864 00:40:55.493 1905864 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:40:55.493 16:54:14 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4166548f-ede2-4e15-bb0f-1b38594c8fff lbd_0 20480 00:40:56.058 16:54:15 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=44314d12-dbba-4b3a-90ff-f5556e35032d 00:40:56.058 16:54:15 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 44314d12-dbba-4b3a-90ff-f5556e35032d lvs_n_0 00:40:57.956 16:54:17 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=87d5a499-61ea-4a6c-bccc-dfd70d9df2cc 00:40:57.956 16:54:17 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 87d5a499-61ea-4a6c-bccc-dfd70d9df2cc 00:40:57.956 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=87d5a499-61ea-4a6c-bccc-dfd70d9df2cc 00:40:57.956 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:40:57.956 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:40:57.956 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:40:57.956 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:40:58.213 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:40:58.213 { 00:40:58.213 "uuid": "4166548f-ede2-4e15-bb0f-1b38594c8fff", 00:40:58.213 "name": "lvs_0", 00:40:58.213 "base_bdev": "Nvme0n1", 00:40:58.213 "total_data_clusters": 476466, 00:40:58.213 "free_clusters": 471346, 00:40:58.213 "block_size": 512, 00:40:58.213 "cluster_size": 4194304 00:40:58.213 }, 00:40:58.213 { 00:40:58.213 "uuid": "87d5a499-61ea-4a6c-bccc-dfd70d9df2cc", 00:40:58.213 "name": "lvs_n_0", 00:40:58.213 "base_bdev": "44314d12-dbba-4b3a-90ff-f5556e35032d", 00:40:58.213 "total_data_clusters": 5114, 00:40:58.213 "free_clusters": 5114, 00:40:58.213 "block_size": 512, 00:40:58.213 "cluster_size": 4194304 00:40:58.213 } 00:40:58.213 ]' 00:40:58.213 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="87d5a499-61ea-4a6c-bccc-dfd70d9df2cc") .free_clusters' 00:40:58.213 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:40:58.213 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="87d5a499-61ea-4a6c-bccc-dfd70d9df2cc") .cluster_size' 00:40:58.471 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:40:58.471 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:40:58.471 16:54:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:40:58.471 20456 00:40:58.471 16:54:17 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:40:58.471 16:54:17 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 87d5a499-61ea-4a6c-bccc-dfd70d9df2cc lbd_nest_0 20456 00:40:58.728 16:54:18 
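Both get_lvs_free_mb evaluations above are the same arithmetic, free_clusters times cluster_size converted to MiB, reconstructed here from the logged values (an illustration, not the helper's code):

  echo $(( 476466 * 4194304 / 1024 / 1024 ))   # lvs_0: 1905864 MiB, then capped to 20480 for lbd_0
  echo $(( 5114 * 4194304 / 1024 / 1024 ))     # lvs_n_0: 20456 MiB, already under the cap

Note the nested store surrenders a few clusters to lvstore metadata: 20480 MiB would be 5120 clusters, but only 5114 are usable.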
nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=6ce4f947-1907-4d0a-8938-16962dce8fa7 00:40:58.728 16:54:18 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:58.986 16:54:18 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:40:58.986 16:54:18 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6ce4f947-1907-4d0a-8938-16962dce8fa7 00:40:59.244 16:54:18 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:59.502 16:54:18 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:40:59.502 16:54:18 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:40:59.502 16:54:18 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:40:59.502 16:54:18 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:40:59.502 16:54:18 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:59.502 EAL: No free 2048 kB hugepages reported on node 1 00:41:11.690 Initializing NVMe Controllers 00:41:11.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:11.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:11.690 Initialization complete. Launching workers. 00:41:11.690 ======================================================== 00:41:11.690 Latency(us) 00:41:11.690 Device Information : IOPS MiB/s Average min max 00:41:11.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 43.89 0.02 22841.38 188.45 48672.64 00:41:11.690 ======================================================== 00:41:11.690 Total : 43.89 0.02 22841.38 188.45 48672.64 00:41:11.690 00:41:11.690 16:54:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:11.690 16:54:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:11.690 EAL: No free 2048 kB hugepages reported on node 1 00:41:21.649 Initializing NVMe Controllers 00:41:21.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:21.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:21.649 Initialization complete. Launching workers. 
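The target-side wiring above reduces to three RPC calls: create the subsystem, attach the nested lvol as a namespace, and open a TCP listener (condensed, with the rpc.py path shortened; values exactly as logged):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6ce4f947-1907-4d0a-8938-16962dce8fa7
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The per-point results of the sweep begin below with -q 1 -o 512.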
00:41:21.649 ======================================================== 00:41:21.649 Latency(us) 00:41:21.649 Device Information : IOPS MiB/s Average min max 00:41:21.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.60 10.32 12115.07 5007.24 50877.24 00:41:21.650 ======================================================== 00:41:21.650 Total : 82.60 10.32 12115.07 5007.24 50877.24 00:41:21.650 00:41:21.650 16:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:41:21.650 16:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:21.650 16:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:21.650 EAL: No free 2048 kB hugepages reported on node 1 00:41:31.608 Initializing NVMe Controllers 00:41:31.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:31.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:31.608 Initialization complete. Launching workers. 00:41:31.608 ======================================================== 00:41:31.608 Latency(us) 00:41:31.608 Device Information : IOPS MiB/s Average min max 00:41:31.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7454.60 3.64 4292.38 285.18 12021.49 00:41:31.608 ======================================================== 00:41:31.608 Total : 7454.60 3.64 4292.38 285.18 12021.49 00:41:31.608 00:41:31.608 16:54:49 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:31.608 16:54:49 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:31.608 EAL: No free 2048 kB hugepages reported on node 1 00:41:41.581 Initializing NVMe Controllers 00:41:41.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:41.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:41.581 Initialization complete. Launching workers. 00:41:41.581 ======================================================== 00:41:41.581 Latency(us) 00:41:41.581 Device Information : IOPS MiB/s Average min max 00:41:41.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2421.81 302.73 13213.64 1117.77 38740.88 00:41:41.581 ======================================================== 00:41:41.581 Total : 2421.81 302.73 13213.64 1117.77 38740.88 00:41:41.581 00:41:41.581 16:55:00 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:41:41.581 16:55:00 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:41.581 16:55:00 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:41.581 EAL: No free 2048 kB hugepages reported on node 1 00:41:51.545 Initializing NVMe Controllers 00:41:51.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:51.545 Controller IO queue size 128, less than required. 00:41:51.545 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
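The qd_depth and io_size arrays declared earlier drive every perf invocation in this sweep; expanded, the loop is equivalent to (a sketch, with the binary path shortened):

  for qd in 1 32 128; do
    for o in 512 131072; do
      spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done

The -q 1 and -q 32 points are above; the -q 128 points, for which the queue-size warning just printed is expected, follow.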
00:41:51.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:51.545 Initialization complete. Launching workers. 00:41:51.545 ======================================================== 00:41:51.545 Latency(us) 00:41:51.545 Device Information : IOPS MiB/s Average min max 00:41:51.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11999.23 5.86 10667.40 1634.09 24194.93 00:41:51.545 ======================================================== 00:41:51.545 Total : 11999.23 5.86 10667.40 1634.09 24194.93 00:41:51.545 00:41:51.545 16:55:10 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:51.545 16:55:10 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:51.545 EAL: No free 2048 kB hugepages reported on node 1 00:42:01.586 Initializing NVMe Controllers 00:42:01.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:01.586 Controller IO queue size 128, less than required. 00:42:01.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:01.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:01.586 Initialization complete. Launching workers. 00:42:01.586 ======================================================== 00:42:01.586 Latency(us) 00:42:01.586 Device Information : IOPS MiB/s Average min max 00:42:01.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1210.22 151.28 106012.77 14972.43 221984.15 00:42:01.586 ======================================================== 00:42:01.586 Total : 1210.22 151.28 106012.77 14972.43 221984.15 00:42:01.586 00:42:01.586 16:55:20 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:01.586 16:55:21 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6ce4f947-1907-4d0a-8938-16962dce8fa7 00:42:02.519 16:55:21 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:42:02.519 16:55:22 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 44314d12-dbba-4b3a-90ff-f5556e35032d 00:42:03.085 16:55:22 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:42:03.085 16:55:22 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:42:03.085 16:55:22 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:42:03.085 16:55:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:03.085 16:55:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:42:03.085 16:55:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:03.085 16:55:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:42:03.085 16:55:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:03.085 16:55:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:03.085 rmmod nvme_tcp 00:42:03.085 rmmod nvme_fabrics 00:42:03.085 rmmod nvme_keyring 00:42:03.343 16:55:22 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2899809 ']' 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2899809 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 2899809 ']' 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 2899809 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2899809 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2899809' 00:42:03.343 killing process with pid 2899809 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 2899809 00:42:03.343 16:55:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 2899809 00:42:05.869 16:55:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:05.869 16:55:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:05.869 16:55:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:05.869 16:55:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:05.869 16:55:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:05.869 16:55:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:05.869 16:55:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:05.869 16:55:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:07.769 16:55:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:07.769 00:42:07.769 real 1m36.513s 00:42:07.769 user 5m55.153s 00:42:07.769 sys 0m17.915s 00:42:07.769 16:55:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:42:07.769 16:55:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:42:07.769 ************************************ 00:42:07.769 END TEST nvmf_perf 00:42:07.769 ************************************ 00:42:07.769 16:55:27 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:42:07.769 16:55:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:42:07.769 16:55:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:42:07.769 16:55:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:07.769 ************************************ 00:42:07.769 START TEST nvmf_fio_host 00:42:07.769 ************************************ 00:42:07.769 16:55:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:42:07.769 * Looking for test storage... 
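The teardown above follows a fixed pattern: confirm the PID still belongs to the reactor, signal it, then wait so the exit is fully reaped before the next test begins (a condensed sketch of what the killprocess helper appears to do, judging from the trace):

  pid=2899809
  ps --no-headers -o comm= "$pid"   # expect reactor_0, never sudo
  kill "$pid"
  wait "$pid"                       # block until the target has actually exited

That closes nvmf_perf after 1m36s; the nvmf_fio_host test just launched continues below.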
00:42:07.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:42:07.769 16:55:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:07.769 16:55:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:07.769 16:55:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:07.769 16:55:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:07.769 16:55:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.769 16:55:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.769 16:55:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.769 16:55:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:42:07.770 16:55:27 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:42:10.300 Found 0000:82:00.0 (0x8086 - 0x159b) 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
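NIC discovery above is a table lookup: the harness keeps per-family lists of PCI device IDs (0x1592 and 0x159b for Intel E810, 0x37d2 for X722, a set of Mellanox IDs) and filters the host's adapters against them. The same check can be made directly from sysfs (a sketch, not the harness code):

  for pci in 0000:82:00.0 0000:82:00.1; do
    id=$(cat "/sys/bus/pci/devices/$pci/device")
    case "$id" in
      0x1592|0x159b) echo "Found $pci (E810)" ;;
    esac
  done

Both ports of the E810 match, as the lines that follow confirm.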
00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:10.300 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:42:10.301 Found 0000:82:00.1 (0x8086 - 0x159b) 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:42:10.301 Found net devices under 0000:82:00.0: cvl_0_0 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:42:10.301 Found net devices under 0000:82:00.1: cvl_0_1 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
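Mapping each matched PCI function to its kernel net device is a sysfs glob, as the pci_net_devs assignment above shows; stripped of the bookkeeping it is just:

  ls /sys/bus/pci/devices/0000:82:00.0/net/   # cvl_0_0 on this machine
  ls /sys/bus/pci/devices/0000:82:00.1/net/   # cvl_0_1

With two physical interfaces found and is_hw=yes, the harness builds a real TCP test bed next: one port plays target, the other initiator.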
00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:42:10.301 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:10.559 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:10.559 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:10.559 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:42:10.559 16:55:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:42:10.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:10.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:42:10.560 00:42:10.560 --- 10.0.0.2 ping statistics --- 00:42:10.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:10.560 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:10.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
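The target side lives in its own network namespace so the two stacks stay isolated and traffic actually traverses the link between the two ports; condensed, the plumbing logged above is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target sanity check

The reverse ping, from inside the namespace back to the initiator, completes below.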
00:42:10.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:42:10.560 00:42:10.560 --- 10.0.0.1 ping statistics --- 00:42:10.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:10.560 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2913436 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2913436 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 2913436 ']' 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:10.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:42:10.560 16:55:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:42:10.560 [2024-07-22 16:55:30.108132] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:42:10.560 [2024-07-22 16:55:30.108212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:10.560 EAL: No free 2048 kB hugepages reported on node 1 00:42:10.560 [2024-07-22 16:55:30.192610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:10.818 [2024-07-22 16:55:30.286665] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
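With connectivity verified both ways, the target is launched inside the namespace and the harness blocks until the RPC socket answers (condensed; waitforlisten is the harness helper that waits on /var/tmp/spdk.sock, per the message above):

  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # returns once the app listens on /var/tmp/spdk.sock

The -m 0xF mask pins the target to four cores, matching the four reactors reported just below.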
00:42:10.818 [2024-07-22 16:55:30.286720] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:10.818 [2024-07-22 16:55:30.286736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:10.818 [2024-07-22 16:55:30.286749] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:10.818 [2024-07-22 16:55:30.286760] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:10.818 [2024-07-22 16:55:30.286848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:10.818 [2024-07-22 16:55:30.286902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:10.818 [2024-07-22 16:55:30.287020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:42:10.818 [2024-07-22 16:55:30.287023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:11.751 16:55:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:42:11.751 16:55:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:42:11.751 16:55:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:11.751 [2024-07-22 16:55:31.281368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:11.751 16:55:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:42:11.751 16:55:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:11.751 16:55:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.751 16:55:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:42:12.009 Malloc1 00:42:12.009 16:55:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:12.267 16:55:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:42:12.524 16:55:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:12.782 [2024-07-22 16:55:32.320876] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:12.782 16:55:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:42:13.039 16:55:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:42:13.297 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:42:13.297 fio-3.35 00:42:13.297 Starting 1 thread 00:42:13.297 EAL: No free 2048 kB hugepages reported on node 1 00:42:15.821 00:42:15.821 test: (groupid=0, jobs=1): err= 0: pid=2913804: Mon Jul 22 16:55:35 2024 00:42:15.821 read: IOPS=9180, BW=35.9MiB/s (37.6MB/s)(72.0MiB/2007msec) 00:42:15.821 slat (usec): min=2, max=132, avg= 2.90, stdev= 2.21 00:42:15.821 clat (usec): min=2507, max=13138, avg=7667.49, stdev=599.13 00:42:15.821 lat (usec): min=2527, max=13141, avg=7670.39, stdev=599.02 00:42:15.821 clat percentiles (usec): 00:42:15.821 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:42:15.821 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7832], 00:42:15.821 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:42:15.821 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[10290], 99.95th=[11731], 00:42:15.821 | 99.99th=[13042] 00:42:15.821 bw ( KiB/s): min=35728, 
max=37240, per=99.97%, avg=36710.00, stdev=675.53, samples=4 00:42:15.821 iops : min= 8932, max= 9310, avg=9177.50, stdev=168.88, samples=4 00:42:15.821 write: IOPS=9186, BW=35.9MiB/s (37.6MB/s)(72.0MiB/2007msec); 0 zone resets 00:42:15.821 slat (usec): min=2, max=110, avg= 3.01, stdev= 2.08 00:42:15.821 clat (usec): min=1131, max=11812, avg=6228.69, stdev=516.55 00:42:15.821 lat (usec): min=1138, max=11815, avg=6231.70, stdev=516.50 00:42:15.821 clat percentiles (usec): 00:42:15.821 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:42:15.821 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6194], 60.00th=[ 6325], 00:42:15.821 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6849], 95.00th=[ 6980], 00:42:15.821 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[10159], 99.95th=[11338], 00:42:15.821 | 99.99th=[11731] 00:42:15.821 bw ( KiB/s): min=36560, max=37016, per=100.00%, avg=36762.00, stdev=215.59, samples=4 00:42:15.821 iops : min= 9140, max= 9254, avg=9190.50, stdev=53.90, samples=4 00:42:15.821 lat (msec) : 2=0.03%, 4=0.11%, 10=99.73%, 20=0.13% 00:42:15.821 cpu : usr=64.86%, sys=31.90%, ctx=66, majf=0, minf=35 00:42:15.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:42:15.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:15.821 issued rwts: total=18425,18438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:15.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:15.821 00:42:15.821 Run status group 0 (all jobs): 00:42:15.821 READ: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=72.0MiB (75.5MB), run=2007-2007msec 00:42:15.821 WRITE: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=72.0MiB (75.5MB), run=2007-2007msec 00:42:15.821 16:55:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:42:15.821 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:42:15.821 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:42:15.822 16:55:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:42:15.822 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:42:15.822 fio-3.35 00:42:15.822 Starting 1 thread 00:42:15.822 EAL: No free 2048 kB hugepages reported on node 1 00:42:18.350 00:42:18.350 test: (groupid=0, jobs=1): err= 0: pid=2914254: Mon Jul 22 16:55:37 2024 00:42:18.350 read: IOPS=8181, BW=128MiB/s (134MB/s)(257MiB/2011msec) 00:42:18.350 slat (usec): min=2, max=143, avg= 4.23, stdev= 2.54 00:42:18.350 clat (usec): min=3177, max=18129, avg=9068.20, stdev=2119.98 00:42:18.350 lat (usec): min=3181, max=18132, avg=9072.43, stdev=2120.10 00:42:18.350 clat percentiles (usec): 00:42:18.350 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7177], 00:42:18.350 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503], 00:42:18.350 | 70.00th=[10290], 80.00th=[11076], 90.00th=[11863], 95.00th=[12649], 00:42:18.350 | 99.00th=[13698], 99.50th=[14746], 99.90th=[17171], 99.95th=[17695], 00:42:18.350 | 99.99th=[17957] 00:42:18.350 bw ( KiB/s): min=60512, max=74784, per=51.22%, avg=67048.00, stdev=7527.93, samples=4 00:42:18.350 iops : min= 3782, max= 4674, avg=4190.50, stdev=470.50, samples=4 00:42:18.350 write: IOPS=4956, BW=77.4MiB/s (81.2MB/s)(137MiB/1770msec); 0 zone resets 00:42:18.350 slat (usec): min=30, max=204, avg=38.41, stdev= 6.72 00:42:18.350 clat (usec): min=4171, max=17256, avg=11456.16, stdev=1957.16 00:42:18.350 lat (usec): min=4205, max=17293, avg=11494.57, stdev=1957.16 00:42:18.350 clat percentiles (usec): 00:42:18.350 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9765], 00:42:18.350 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:42:18.350 | 70.00th=[12387], 80.00th=[13304], 90.00th=[14222], 95.00th=[15008], 00:42:18.350 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17171], 99.95th=[17171], 00:42:18.350 | 99.99th=[17171] 00:42:18.350 bw ( KiB/s): min=62016, max=78400, per=88.04%, avg=69816.00, stdev=8034.76, samples=4 00:42:18.350 iops : min= 3876, max= 4900, avg=4363.50, stdev=502.17, samples=4 00:42:18.350 lat (msec) : 4=0.08%, 10=51.39%, 20=48.53% 00:42:18.350 cpu : usr=79.70%, sys=17.51%, ctx=147, majf=0, minf=67 
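Both fio passes (the 4 KiB example_config job above and the 16 KiB mock_sgl_config job whose remaining stats print below) share the same wiring: the SPDK external ioengine is LD_PRELOADed into stock fio, and the NVMe-oF transport ID rides in through --filename (condensed from the trace, repository paths shortened):

  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
      ./app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The ldd/grep/awk probing around each launch only decides whether an ASan runtime must be preloaded ahead of the plugin; none is found on this build, so asan_lib stays empty.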
00:42:18.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:42:18.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:18.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:18.350 issued rwts: total=16452,8773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:18.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:18.350 00:42:18.350 Run status group 0 (all jobs): 00:42:18.351 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (270MB), run=2011-2011msec 00:42:18.351 WRITE: bw=77.4MiB/s (81.2MB/s), 77.4MiB/s-77.4MiB/s (81.2MB/s-81.2MB/s), io=137MiB (144MB), run=1770-1770msec 00:42:18.351 16:55:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:18.608 16:55:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:42:18.608 16:55:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:42:18.608 16:55:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:42:18.608 16:55:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:42:18.608 16:55:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:42:18.608 16:55:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:18.608 16:55:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:18.608 16:55:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:42:18.608 16:55:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:42:18.608 16:55:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:81:00.0 00:42:18.608 16:55:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:81:00.0 -i 10.0.0.2 00:42:21.886 Nvme0n1 00:42:21.886 16:55:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:42:27.142 16:55:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=0fcff09b-77b4-4ae4-bae2-e53fa1ca6297 00:42:27.142 16:55:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 0fcff09b-77b4-4ae4-bae2-e53fa1ca6297 00:42:27.142 16:55:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=0fcff09b-77b4-4ae4-bae2-e53fa1ca6297 00:42:27.142 16:55:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:42:27.142 16:55:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:42:27.142 16:55:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:42:27.142 16:55:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:42:27.399 16:55:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:42:27.399 { 00:42:27.399 "uuid": "0fcff09b-77b4-4ae4-bae2-e53fa1ca6297", 00:42:27.399 "name": "lvs_0", 00:42:27.399 "base_bdev": "Nvme0n1", 00:42:27.399 "total_data_clusters": 1862, 00:42:27.399 "free_clusters": 1862, 00:42:27.399 "block_size": 
512, 00:42:27.399 "cluster_size": 1073741824 00:42:27.399 } 00:42:27.399 ]' 00:42:27.399 16:55:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="0fcff09b-77b4-4ae4-bae2-e53fa1ca6297") .free_clusters' 00:42:27.399 16:55:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1862 00:42:27.399 16:55:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="0fcff09b-77b4-4ae4-bae2-e53fa1ca6297") .cluster_size' 00:42:27.399 16:55:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:42:27.399 16:55:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=1906688 00:42:27.399 16:55:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 1906688 00:42:27.399 1906688 00:42:27.399 16:55:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:42:27.964 908deedb-9e48-4f36-92c2-c0167a06917f 00:42:28.222 16:55:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:42:28.222 16:55:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:42:28.479 16:55:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:28.737 16:55:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:42:28.737 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:42:28.737 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:42:28.737 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:28.737 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:42:28.737 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:28.737 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:42:28.737 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:42:28.737 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:42:28.995 16:55:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:42:28.995 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:42:28.995 fio-3.35 00:42:28.995 Starting 1 thread 00:42:29.252 EAL: No free 2048 kB hugepages reported on node 1 00:42:31.777 00:42:31.777 test: (groupid=0, jobs=1): err= 0: pid=2915805: Mon Jul 22 16:55:51 2024 00:42:31.777 read: IOPS=5642, BW=22.0MiB/s (23.1MB/s)(44.3MiB/2008msec) 00:42:31.777 slat (usec): min=2, max=144, avg= 3.05, stdev= 2.40 00:42:31.777 clat (usec): min=962, max=334674, avg=12347.71, stdev=24189.69 00:42:31.777 lat (usec): min=965, max=334679, avg=12350.76, stdev=24189.87 00:42:31.777 clat percentiles (msec): 00:42:31.777 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10], 00:42:31.777 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:42:31.777 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12], 00:42:31.777 | 99.00th=[ 13], 99.50th=[ 321], 99.90th=[ 334], 99.95th=[ 334], 00:42:31.777 | 99.99th=[ 334] 00:42:31.777 bw ( KiB/s): min= 8574, max=27696, per=99.84%, avg=22535.50, stdev=9317.05, samples=4 00:42:31.777 iops : min= 2143, max= 6924, avg=5633.75, stdev=2329.51, samples=4 00:42:31.777 write: IOPS=5612, BW=21.9MiB/s (23.0MB/s)(44.0MiB/2008msec); 0 zone resets 00:42:31.777 slat (usec): min=2, max=110, avg= 3.17, stdev= 2.02 00:42:31.777 clat (usec): min=392, max=332810, avg=10187.03, stdev=23697.95 00:42:31.777 lat (usec): min=396, max=332816, avg=10190.20, stdev=23698.07 00:42:31.777 clat percentiles (msec): 00:42:31.777 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:42:31.777 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:42:31.777 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 10], 00:42:31.777 | 99.00th=[ 11], 99.50th=[ 321], 99.90th=[ 334], 99.95th=[ 334], 00:42:31.777 | 99.99th=[ 334] 00:42:31.777 bw ( KiB/s): min= 9085, max=26944, per=99.79%, avg=22401.25, stdev=8878.72, samples=4 00:42:31.777 iops : min= 2271, max= 6736, avg=5600.25, stdev=2219.80, samples=4 00:42:31.777 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:42:31.778 lat (msec) : 2=0.03%, 4=0.13%, 10=61.58%, 20=37.66%, 50=0.01% 00:42:31.778 lat (msec) : 500=0.57% 00:42:31.778 cpu : usr=63.73%, sys=34.33%, ctx=81, majf=0, minf=35 00:42:31.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:42:31.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:42:31.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:31.778 issued rwts: total=11331,11269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:31.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:31.778 00:42:31.778 Run status group 0 (all jobs): 00:42:31.778 READ: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=44.3MiB (46.4MB), run=2008-2008msec 00:42:31.778 WRITE: bw=21.9MiB/s (23.0MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=44.0MiB (46.2MB), run=2008-2008msec 00:42:31.778 16:55:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:31.778 16:55:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:42:33.148 16:55:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=74629b7d-8497-4a48-884c-c9bfc082d9c8 00:42:33.148 16:55:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 74629b7d-8497-4a48-884c-c9bfc082d9c8 00:42:33.148 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=74629b7d-8497-4a48-884c-c9bfc082d9c8 00:42:33.148 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:42:33.148 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:42:33.148 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:42:33.148 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:42:33.405 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:42:33.405 { 00:42:33.405 "uuid": "0fcff09b-77b4-4ae4-bae2-e53fa1ca6297", 00:42:33.405 "name": "lvs_0", 00:42:33.405 "base_bdev": "Nvme0n1", 00:42:33.405 "total_data_clusters": 1862, 00:42:33.405 "free_clusters": 0, 00:42:33.405 "block_size": 512, 00:42:33.405 "cluster_size": 1073741824 00:42:33.405 }, 00:42:33.405 { 00:42:33.405 "uuid": "74629b7d-8497-4a48-884c-c9bfc082d9c8", 00:42:33.405 "name": "lvs_n_0", 00:42:33.405 "base_bdev": "908deedb-9e48-4f36-92c2-c0167a06917f", 00:42:33.405 "total_data_clusters": 476206, 00:42:33.405 "free_clusters": 476206, 00:42:33.405 "block_size": 512, 00:42:33.405 "cluster_size": 4194304 00:42:33.405 } 00:42:33.405 ]' 00:42:33.405 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="74629b7d-8497-4a48-884c-c9bfc082d9c8") .free_clusters' 00:42:33.405 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=476206 00:42:33.405 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="74629b7d-8497-4a48-884c-c9bfc082d9c8") .cluster_size' 00:42:33.405 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:42:33.405 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=1904824 00:42:33.405 16:55:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 1904824 00:42:33.405 1904824 00:42:33.405 16:55:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:42:34.775 50c605a0-857c-4921-b0f5-3148210d6462 00:42:34.775 16:55:54 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:42:34.775 16:55:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:42:35.032 16:55:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:42:35.289 16:55:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp 
adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:42:35.546 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:42:35.546 fio-3.35 00:42:35.546 Starting 1 thread 00:42:35.546 EAL: No free 2048 kB hugepages reported on node 1 00:42:38.073 00:42:38.073 test: (groupid=0, jobs=1): err= 0: pid=2916674: Mon Jul 22 16:55:57 2024 00:42:38.073 read: IOPS=5900, BW=23.0MiB/s (24.2MB/s)(46.3MiB/2010msec) 00:42:38.073 slat (usec): min=2, max=160, avg= 3.04, stdev= 2.86 00:42:38.073 clat (usec): min=4447, max=19482, avg=11895.39, stdev=1006.74 00:42:38.073 lat (usec): min=4452, max=19485, avg=11898.43, stdev=1006.64 00:42:38.073 clat percentiles (usec): 00:42:38.073 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:42:38.073 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:42:38.073 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13173], 95.00th=[13435], 00:42:38.073 | 99.00th=[14091], 99.50th=[14353], 99.90th=[17171], 99.95th=[17695], 00:42:38.073 | 99.99th=[19530] 00:42:38.073 bw ( KiB/s): min=21952, max=24240, per=100.00%, avg=23602.00, stdev=1102.15, samples=4 00:42:38.073 iops : min= 5488, max= 6060, avg=5900.50, stdev=275.54, samples=4 00:42:38.073 write: IOPS=5898, BW=23.0MiB/s (24.2MB/s)(46.3MiB/2010msec); 0 zone resets 00:42:38.073 slat (usec): min=2, max=123, avg= 3.17, stdev= 2.57 00:42:38.073 clat (usec): min=2159, max=18770, avg=9570.89, stdev=898.56 00:42:38.073 lat (usec): min=2168, max=18772, avg=9574.06, stdev=898.51 00:42:38.073 clat percentiles (usec): 00:42:38.073 | 1.00th=[ 7570], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8979], 00:42:38.073 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:42:38.073 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10814], 00:42:38.073 | 99.00th=[11469], 99.50th=[11863], 99.90th=[17433], 99.95th=[17695], 00:42:38.073 | 99.99th=[18744] 00:42:38.073 bw ( KiB/s): min=23000, max=23872, per=99.92%, avg=23574.00, stdev=390.90, samples=4 00:42:38.073 iops : min= 5750, max= 5968, avg=5893.50, stdev=97.73, samples=4 00:42:38.073 lat (msec) : 4=0.05%, 10=37.02%, 20=62.94% 00:42:38.073 cpu : usr=65.46%, sys=32.45%, ctx=50, majf=0, minf=35 00:42:38.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:42:38.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:38.073 issued rwts: total=11860,11856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:38.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:38.073 00:42:38.073 Run status group 0 (all jobs): 00:42:38.073 READ: bw=23.0MiB/s (24.2MB/s), 23.0MiB/s-23.0MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.6MB), run=2010-2010msec 00:42:38.073 WRITE: bw=23.0MiB/s (24.2MB/s), 23.0MiB/s-23.0MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.6MB), run=2010-2010msec 00:42:38.073 16:55:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:42:38.073 16:55:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:42:38.073 16:55:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:42:46.173 16:56:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l 
lvs_n_0 00:42:46.173 16:56:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:42:51.434 16:56:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:42:51.691 16:56:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:42:55.044 16:56:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:55.044 16:56:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:42:55.044 16:56:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:42:55.044 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:55.044 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:42:55.044 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:55.044 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:42:55.044 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:55.044 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:55.045 rmmod nvme_tcp 00:42:55.045 rmmod nvme_fabrics 00:42:55.045 rmmod nvme_keyring 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2913436 ']' 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2913436 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 2913436 ']' 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 2913436 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2913436 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2913436' 00:42:55.045 killing process with pid 2913436 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 2913436 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 2913436 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:55.045 16:56:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:57.581 16:56:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:57.581 00:42:57.581 real 0m49.441s 00:42:57.581 user 3m10.048s 00:42:57.581 sys 0m7.204s 00:42:57.581 16:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:42:57.581 16:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:42:57.581 ************************************ 00:42:57.581 END TEST nvmf_fio_host 00:42:57.581 ************************************ 00:42:57.581 16:56:16 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:42:57.581 16:56:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:42:57.581 16:56:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:42:57.581 16:56:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.581 ************************************ 00:42:57.581 START TEST nvmf_failover 00:42:57.581 ************************************ 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:42:57.581 * Looking for test storage... 00:42:57.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:57.581 16:56:16 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:42:57.581 16:56:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:00.112 16:56:19 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:43:00.112 Found 0000:82:00.0 (0x8086 - 0x159b) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:43:00.112 Found 0000:82:00.1 (0x8086 - 0x159b) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:00.112 16:56:19 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:43:00.112 Found net devices under 0000:82:00.0: cvl_0_0 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:43:00.112 Found net devices under 0000:82:00.1: cvl_0_1 00:43:00.112 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:00.113 
16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:00.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:00.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:43:00.113 00:43:00.113 --- 10.0.0.2 ping statistics --- 00:43:00.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:00.113 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:00.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:00.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:43:00.113 00:43:00.113 --- 10.0.0.1 ping statistics --- 00:43:00.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:00.113 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2921123 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2921123 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2921123 ']' 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:43:00.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:43:00.113 [2024-07-22 16:56:19.400787] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:43:00.113 [2024-07-22 16:56:19.400860] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:00.113 EAL: No free 2048 kB hugepages reported on node 1 00:43:00.113 [2024-07-22 16:56:19.479259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:00.113 [2024-07-22 16:56:19.569300] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:00.113 [2024-07-22 16:56:19.569354] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:00.113 [2024-07-22 16:56:19.569383] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:00.113 [2024-07-22 16:56:19.569395] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:00.113 [2024-07-22 16:56:19.569405] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:00.113 [2024-07-22 16:56:19.569548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:43:00.113 [2024-07-22 16:56:19.569614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:43:00.113 [2024-07-22 16:56:19.569616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:00.113 16:56:19 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:00.371 [2024-07-22 16:56:19.919794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:00.371 16:56:19 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:43:00.629 Malloc0 00:43:00.629 16:56:20 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:00.886 16:56:20 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:01.144 16:56:20 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:01.401 [2024-07-22 16:56:20.939787] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:01.402 16:56:20 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:43:01.659 [2024-07-22 16:56:21.176499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:01.659 16:56:21 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:43:01.917 [2024-07-22 16:56:21.421316] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:43:01.917 16:56:21 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2921407 00:43:01.917 16:56:21 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:43:01.917 16:56:21 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:01.917 16:56:21 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2921407 /var/tmp/bdevperf.sock 00:43:01.917 16:56:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2921407 ']' 00:43:01.917 16:56:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:01.917 16:56:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:43:01.917 16:56:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:01.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
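(At this point failover.sh has exposed the same subsystem on three TCP listeners, 4420/4421/4422, so the test can remove one listener at a time and force the bdevperf host to fail over to the next. A condensed bash sketch of that setup; the loop form is illustrative, but the RPC command and arguments are exactly the ones traced above:

    # Expose nqn.2016-06.io.spdk:cnode1 on three ports; bdevperf is then
    # failed over between them as listeners are removed one by one.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for port in 4420 4421 4422; do
        "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

)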
00:43:01.917 16:56:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:01.917 16:56:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:43:02.175 16:56:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:02.175 16:56:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:43:02.175 16:56:21 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:43:02.739 NVMe0n1 00:43:02.739 16:56:22 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:43:02.739 00:43:02.996 16:56:22 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2921539 00:43:02.996 16:56:22 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:43:02.996 16:56:22 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:03.930 16:56:23 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:04.189 [2024-07-22 16:56:23.653927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382090 is same with the state(5) to be set 00:43:04.189 [2024-07-22 16:56:23.654185] 
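Condensed, the choreography host/failover.sh is driving here: attach the same subsystem twice under one bdev name so that 10.0.0.2:4421 becomes an alternate path for NVMe0, start bdevperf I/O, then tear the primary listener down mid-run to force the initiator onto the alternate path. A minimal sketch of those steps, with every command copied from the xtrace above (only the $RPC shorthand for the long Jenkins path is an editorial convenience, not part of the test):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # primary path for the NVMe0 bdev
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # alternate path registered under the same bdev name; this is what arms bdev_nvme's failover logic
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # with bdevperf I/O in flight, drop the primary listener to trigger the failover
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The tcp.c:1598 *ERROR* line that follows the listener removal is the target-side qpair being torn down; the 'Start failover from 10.0.0.2:4420 to 10.0.0.2:4421' notice in try.txt further below confirms the initiator-side failover actually fired.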
00:43:04.190 16:56:23 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:43:07.469 16:56:26 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:43:07.726
00:43:07.726 16:56:27 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:43:07.984 [2024-07-22 16:56:27.433341] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383610 is same with the state(5) to be set
00:43:07.984 16:56:27 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:43:11.263 16:56:30 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:43:11.263 [2024-07-22 16:56:30.731335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:43:11.263 16:56:30 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:43:12.197 16:56:31 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:43:12.455 [2024-07-22 16:56:32.012276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383980 is same with the state(5) to be set
00:43:12.455 16:56:32 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2921539
00:43:19.017 0
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2921407
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2921407 ']'
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2921407
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2921407
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2921407'
00:43:19.017 killing process with pid 2921407
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2921407
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2921407
00:43:19.017 16:56:37 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:43:19.017 [2024-07-22 16:56:21.477559] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:43:19.017 [2024-07-22 16:56:21.477642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921407 ]
00:43:19.017 EAL: No free 2048 kB hugepages reported on node 1
00:43:19.017 [2024-07-22 16:56:21.550929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:19.017 [2024-07-22 16:56:21.637387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:43:19.017 Running I/O for 15 seconds...
00:43:19.017 [2024-07-22 16:56:23.656300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:19.017 [2024-07-22 16:56:23.656344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:19.019 [2024-07-22 16:56:23.658220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:19.019 [2024-07-22 16:56:23.658233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:19.019 [2024-07-22 16:56:23.658249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:19.019 [2024-07-22 16:56:23.658263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:19.020 [2024-07-22 16:56:23.659602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:19.020 [2024-07-22 16:56:23.659616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:19.020 [2024-07-22 16:56:23.659653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:43:19.020 [2024-07-22 16:56:23.659671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83696 len:8 PRP1 0x0 PRP2 0x0
00:43:19.020 [2024-07-22 16:56:23.659684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:19.020 [2024-07-22 16:56:23.659703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
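Two abort flavors are interleaved in the try.txt flood above. Commands already in flight on the deleted submission queue come back through spdk_nvme_print_completion as ABORTED - SQ DELETION, where (00/08) is NVMe generic status type 00h, status code 08h (Command Aborted due to SQ Deletion); requests still queued in the driver are flushed by nvme_qpair_abort_queued_reqs and logged as 'Command completed manually'. A quick, hypothetical triage when reading such a log offline (the grep patterns are verbatim from the messages above; try.txt is the bdevperf log cat'd by host/failover.sh@63):

  # count in-flight commands aborted by the SQ deletion
  grep -c 'ABORTED - SQ DELETION' try.txt
  # count queued requests flushed when the qpair disconnected
  grep -c 'Command completed manually' try.txt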
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.021 [2024-07-22 16:56:23.660414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83816 len:8 PRP1 0x0 PRP2 0x0 00:43:19.021 [2024-07-22 16:56:23.660432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:23.660490] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ce2ef0 was disconnected and freed. reset controller. 00:43:19.021 [2024-07-22 16:56:23.660508] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:43:19.021 [2024-07-22 16:56:23.660541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:19.021 [2024-07-22 16:56:23.660559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:23.660574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:19.021 [2024-07-22 16:56:23.660588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:23.660601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:19.021 [2024-07-22 16:56:23.660620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:23.660635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:19.021 [2024-07-22 16:56:23.660648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:23.660661] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:19.021 [2024-07-22 16:56:23.663903] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:19.021 [2024-07-22 16:56:23.663941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc3740 (9): Bad file descriptor 00:43:19.021 [2024-07-22 16:56:23.704180] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:43:19.021 [2024-07-22 16:56:27.435087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.021 [2024-07-22 16:56:27.435131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435449] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.021 [2024-07-22 16:56:27.435534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.021 [2024-07-22 16:56:27.435562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435757] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.435971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.435988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99672 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 
[2024-07-22 16:56:27.436379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.022 [2024-07-22 16:56:27.436518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.022 [2024-07-22 16:56:27.436532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.436983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.436998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.023 [2024-07-22 16:56:27.437012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.023 [2024-07-22 16:56:27.437041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 
16:56:27.437589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.023 [2024-07-22 16:56:27.437676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.023 [2024-07-22 16:56:27.437690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.437705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.024 [2024-07-22 16:56:27.437718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.437733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.024 [2024-07-22 16:56:27.437747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.437761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.024 [2024-07-22 16:56:27.437775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.437790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.024 [2024-07-22 16:56:27.437804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.437818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.024 [2024-07-22 16:56:27.437832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.437847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.024 [2024-07-22 16:56:27.437861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.437876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.024 [2024-07-22 16:56:27.437890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.437905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.024 [2024-07-22 16:56:27.437919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.437934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.024 [2024-07-22 16:56:27.437948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.437977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:19.024 [2024-07-22 16:56:27.437993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100176 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100184 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100192 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100200 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100208 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100216 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100224 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100232 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100240 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100248 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 
16:56:27.438516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100256 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100264 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100272 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100280 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100288 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100296 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438805] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100304 len:8 PRP1 0x0 PRP2 0x0 00:43:19.024 [2024-07-22 16:56:27.438840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.024 [2024-07-22 16:56:27.438853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.024 [2024-07-22 16:56:27.438864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.024 [2024-07-22 16:56:27.438875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100312 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.438888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.438901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.438912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.438923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100320 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.438935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.438955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.438973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.438986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100328 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.438998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100336 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100344 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100352 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100360 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100368 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100376 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100384 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100392 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 
16:56:27.439423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100400 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100408 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100416 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100424 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:19.025 [2024-07-22 16:56:27.439618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:19.025 [2024-07-22 16:56:27.439629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100432 len:8 PRP1 0x0 PRP2 0x0 00:43:19.025 [2024-07-22 16:56:27.439641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:19.025 [2024-07-22 16:56:27.439698] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ce4f20 was disconnected and freed. reset controller. 
00:43:19.025 [2024-07-22 16:56:27.439715] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:43:19.025 [2024-07-22 16:56:27.439750 - 16:56:27.439859] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3, cid:2, cid:1, cid:0 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:19.025 [2024-07-22 16:56:27.439871] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:43:19.025 [2024-07-22 16:56:27.443109] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:43:19.025 [2024-07-22 16:56:27.443148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc3740 (9): Bad file descriptor
00:43:19.025 [2024-07-22 16:56:27.484900] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:43:19.025 [2024-07-22 16:56:32.013794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:19.025 [2024-07-22 16:56:32.013837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:19.025 [2024-07-22 16:56:32.013873 - 16:56:32.015054] (same READ / ABORTED - SQ DELETION (00/08) pair repeated, cid varies, for lba:44960 through lba:45256, step 8)
00:43:19.026 [2024-07-22 16:56:32.015070 - 16:56:32.016520] (WRITE sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, cid varies, for lba:45280 through lba:45656, step 8, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:43:19.028 [2024-07-22 16:56:32.016568 - 16:56:32.018609] (aborting queued i/o / Command completed manually / ABORTED - SQ DELETION (00/08) sequence repeated for each queued WRITE sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, lba:45664 through lba:45968, step 8, then for READ lba:45264 and lba:45272)
00:43:19.029 [2024-07-22 16:56:32.018668] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ce4f20 was disconnected and freed. reset controller.
00:43:19.029 [2024-07-22 16:56:32.018686] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:43:19.029 [2024-07-22 16:56:32.018723 - 16:56:32.018833] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0, cid:1, cid:2, cid:3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:19.030 [2024-07-22 16:56:32.018846] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:43:19.030 [2024-07-22 16:56:32.018884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc3740 (9): Bad file descriptor
00:43:19.030 [2024-07-22 16:56:32.022145] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:43:19.030 [2024-07-22 16:56:32.091477] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:43:19.030
00:43:19.030                                                  Latency(us)
00:43:19.030 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:43:19.030 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:43:19.030 Verification LBA range: start 0x0 length 0x4000
00:43:19.030 NVMe0n1            :   15.01       8888.86      34.72     389.33     0.00   13770.16     767.62   15049.01
00:43:19.030 ===================================================================================================================
00:43:19.030 Total              :               8888.86      34.72     389.33     0.00   13770.16     767.62   15049.01
00:43:19.030 Received shutdown signal, test time was about 15.000000 seconds
00:43:19.030
00:43:19.030                                                  Latency(us)
00:43:19.030 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:43:19.030 ===================================================================================================================
00:43:19.030 Total              :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
16:56:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:43:19.030 16:56:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:43:19.030 16:56:37 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:43:19.030 16:56:37 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2923374
00:43:19.030 16:56:37 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:43:19.030 16:56:37 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2923374 /var/tmp/bdevperf.sock
00:43:19.030 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2923374 ']'
00:43:19.030 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:43:19.030 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:43:19.030 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
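Condensed from the RPC trace that follows, the path setup amounts to two extra target listeners plus three host-side attaches under one controller name; a sketch using the same rpc.py invocations, with the socket path and NQN taken from this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1
# Target side: expose two additional portals next to the original 4420.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
# Host side: attaching the same subsystem under one bdev name per portal
# registers 4421/4422 as failover trids rather than new controllers.
for port in 4420 4421 4422; do
    $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
done
# Dropping the active path forces a failover to the next registered trid.
$RPC -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"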
00:43:19.030 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:19.030 16:56:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:43:19.030 16:56:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:19.030 16:56:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:43:19.030 16:56:38 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:43:19.030 [2024-07-22 16:56:38.320077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:19.030 16:56:38 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:43:19.030 [2024-07-22 16:56:38.560733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:43:19.030 16:56:38 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:43:19.594 NVMe0n1 00:43:19.594 16:56:39 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:43:19.851 00:43:19.851 16:56:39 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:43:20.416 00:43:20.416 16:56:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:43:20.417 16:56:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:43:20.674 16:56:40 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:43:20.931 16:56:40 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:43:24.213 16:56:43 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:43:24.213 16:56:43 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:43:24.213 16:56:43 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2924045 00:43:24.213 16:56:43 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:24.213 16:56:43 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2924045 00:43:25.146 0 00:43:25.146 16:56:44 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:43:25.146 [2024-07-22 16:56:37.842413] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:43:25.146 [2024-07-22 16:56:37.842505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923374 ]
00:43:25.146 EAL: No free 2048 kB hugepages reported on node 1
00:43:25.146 [2024-07-22 16:56:37.911549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:25.146 [2024-07-22 16:56:37.994534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:43:25.146 [2024-07-22 16:56:40.304606] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:43:25.146 [2024-07-22 16:56:40.304722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:43:25.146 [2024-07-22 16:56:40.304748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:25.146 [2024-07-22 16:56:40.304768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:43:25.146 [2024-07-22 16:56:40.304784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:25.146 [2024-07-22 16:56:40.304799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:43:25.146 [2024-07-22 16:56:40.304814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:25.146 [2024-07-22 16:56:40.304828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:43:25.147 [2024-07-22 16:56:40.304842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:25.147 [2024-07-22 16:56:40.304868] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:43:25.147 [2024-07-22 16:56:40.304920] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:43:25.147 [2024-07-22 16:56:40.304978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e90740 (9): Bad file descriptor
00:43:25.147 [2024-07-22 16:56:40.309935] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:43:25.147 Running I/O for 1 seconds...
00:43:25.147
00:43:25.147                                                  Latency(us)
00:43:25.147 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:43:25.147 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:43:25.147 Verification LBA range: start 0x0 length 0x4000
00:43:25.147 NVMe0n1            :    1.01       8900.03      34.77       0.00     0.00   14321.11    3155.44   11602.30
00:43:25.147 ===================================================================================================================
00:43:25.147 Total              :               8900.03      34.77       0.00     0.00   14321.11    3155.44   11602.30
00:43:25.147 16:56:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:43:25.147 16:56:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:43:25.404 16:56:44 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:43:25.662 16:56:45 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:56:45 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:43:25.919 16:56:45 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:43:26.176 16:56:45 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:43:29.455 16:56:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:56:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
16:56:48 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2923374
16:56:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2923374 ']'
16:56:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2923374
16:56:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
16:56:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
16:56:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2923374
16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2923374'
killing process with pid 2923374
16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2923374
16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2923374
16:56:49 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
16:56:49 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
16:56:49 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:43:29.971
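The short run above is bdevperf in RPC mode: the process starts with -z so it idles on its socket, the controller is attached over that socket, and bdevperf.py triggers the actual I/O. A sketch of that drive-and-check sequence, assuming the run output lands in try.txt as it does here (the count test mirrors the grep -c at failover.sh@65):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# ... add listeners and attach the controller over /var/tmp/bdevperf.sock, as sketched earlier ...
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
count=$(grep -c 'Resetting controller successful' $SPDK/test/nvmf/host/try.txt)
(( count == 3 )) || exit 1   # one successful reset expected per forced failover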
16:56:49 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:29.971 rmmod nvme_tcp 00:43:29.971 rmmod nvme_fabrics 00:43:29.971 rmmod nvme_keyring 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2921123 ']' 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2921123 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2921123 ']' 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2921123 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2921123 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2921123' 00:43:29.971 killing process with pid 2921123 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2921123 00:43:29.971 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2921123 00:43:30.229 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:30.230 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:30.230 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:30.230 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:30.230 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:30.230 16:56:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:30.230 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:30.230 16:56:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:32.762 16:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:32.762 00:43:32.762 real 0m35.117s 00:43:32.762 user 2m2.188s 00:43:32.762 sys 0m6.453s 00:43:32.762 16:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:32.762 16:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
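A sketch of the teardown the trace above performs, in order; the netns removal step is an assumption, since _remove_spdk_ns runs with its output redirected away in this log:

sync
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill 2921123 && wait 2921123     # the nvmf_tgt pid for this run
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed: the log hides _remove_spdk_ns internals
ip -4 addr flush cvl_0_1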
00:43:32.762 ************************************ 00:43:32.762 END TEST nvmf_failover 00:43:32.762 ************************************ 00:43:32.762 16:56:51 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:43:32.762 16:56:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:43:32.762 16:56:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:32.762 16:56:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:32.762 ************************************ 00:43:32.762 START TEST nvmf_host_discovery 00:43:32.762 ************************************ 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:43:32.762 * Looking for test storage... 00:43:32.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:32.762 16:56:51 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:43:32.762 16:56:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.292 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:43:35.293 Found 0000:82:00.0 (0x8086 - 0x159b) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:43:35.293 Found 0000:82:00.1 (0x8086 - 0x159b) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:43:35.293 Found net devices under 0000:82:00.0: cvl_0_0 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:43:35.293 Found net devices under 0000:82:00.1: cvl_0_1 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:43:35.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:43:35.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms
00:43:35.293
00:43:35.293 --- 10.0.0.2 ping statistics ---
00:43:35.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:43:35.293 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:43:35.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:43:35.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms
00:43:35.293
00:43:35.293 --- 10.0.0.1 ping statistics ---
00:43:35.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:43:35.293 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2926935
00:43:35.293 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:43:35.294 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2926935 00:43:35.294 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 2926935 ']' 00:43:35.294 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:35.294 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:43:35.294 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:35.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:35.294 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:35.294 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.294 [2024-07-22 16:56:54.679291] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:43:35.294 [2024-07-22 16:56:54.679391] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:35.294 EAL: No free 2048 kB hugepages reported on node 1 00:43:35.294 [2024-07-22 16:56:54.759693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:35.294 [2024-07-22 16:56:54.849322] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:35.294 [2024-07-22 16:56:54.849385] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:35.294 [2024-07-22 16:56:54.849402] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:35.294 [2024-07-22 16:56:54.849416] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:35.294 [2024-07-22 16:56:54.849428] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
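Condensed, the bring-up that the next stretch of trace performs: nvmf_tgt launched inside the target namespace, a wait for its RPC socket, then the TCP transport, the discovery listener on 8009, and two null bdevs. A sketch with the paths from this run; the rpc_get_methods poll stands in for waitforlisten and is not its verbatim source:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# The RPC UNIX socket lives on the shared filesystem, so it is reachable from the root namespace.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.5; done
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$SPDK/scripts/rpc.py bdev_null_create null0 1000 512   # 1000 MiB, 512-byte blocks
$SPDK/scripts/rpc.py bdev_null_create null1 1000 512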
00:43:35.294 [2024-07-22 16:56:54.849465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.552 [2024-07-22 16:56:54.990630] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:35.552 16:56:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.552 [2024-07-22 16:56:54.998818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.552 null0 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.552 null1 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2927075 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2927075 /tmp/host.sock 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 2927075 ']' 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:43:35.552 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:35.552 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.552 [2024-07-22 16:56:55.070337] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:43:35.552 [2024-07-22 16:56:55.070416] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927075 ] 00:43:35.552 EAL: No free 2048 kB hugepages reported on node 1 00:43:35.552 [2024-07-22 16:56:55.141361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:35.810 [2024-07-22 16:56:55.232876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:35.810 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:35.810 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:43:35.810 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:35.810 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:43:35.810 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:35.810 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.810 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:35.811 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:43:36.069 16:56:55 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.069 [2024-07-22 16:56:55.648551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:43:36.069 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:43:36.327 16:56:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:43:36.891 [2024-07-22 16:56:56.408785] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:43:36.892 [2024-07-22 16:56:56.408814] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:43:36.892 [2024-07-22 16:56:56.408839] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:43:36.892 [2024-07-22 16:56:56.497131] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:43:37.148 [2024-07-22 16:56:56.681441] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:43:37.148 [2024-07-22 16:56:56.681467] 
bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 
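For reference, the autotest_common.sh@910-916 markers interleaved through these lines come from the waitforcondition polling helper. Below is a minimal sketch of that helper, reconstructed from the traced statements (local cond/max, (( max-- )), eval, return 0, sleep 1); the verbatim upstream body may differ:

    # Poll an arbitrary shell condition up to 10 times, one second apart.
    # The condition is eval'ed because callers pass compound expressions
    # such as 'get_notification_count && ((notification_count == expected_count))'.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0   # condition met within the retry budget
            fi
            sleep 1
        done
        return 1           # condition never held; the caller treats this as failure
    }

Each failed attempt re-runs the full RPC pipeline inside the condition, which is why the bdev_nvme_get_controllers / jq / sort / xargs trace repeats once per second until the discovery state settles.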
00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:37.406 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.407 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:43:37.407 16:56:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:37.407 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:43:37.407 16:56:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:37.407 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:37.665 [2024-07-22 16:56:57.072669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:37.665 [2024-07-22 16:56:57.073524] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:43:37.665 [2024-07-22 16:56:57.073561] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.665 [2024-07-22 16:56:57.159813] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:43:37.665 16:56:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:43:37.923 [2024-07-22 16:56:57.465216] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:43:37.923 [2024-07-22 16:56:57.465243] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:43:37.923 [2024-07-22 16:56:57.465253] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:43:38.857 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:38.858 [2024-07-22 16:56:58.308988] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:43:38.858 [2024-07-22 16:56:58.309039] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:43:38.858 [2024-07-22 16:56:58.312539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:38.858 [2024-07-22 16:56:58.312612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:38.858 [2024-07-22 16:56:58.312632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:38.858 [2024-07-22 16:56:58.312646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:38.858 [2024-07-22 16:56:58.312659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:38.858 [2024-07-22 16:56:58.312672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:38.858 [2024-07-22 16:56:58.312686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:38.858 [2024-07-22 16:56:58.312698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:38.858 [2024-07-22 16:56:58.312711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbedda0 is same with the state(5) to be set 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:38.858 16:56:58
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:43:38.858 [2024-07-22 16:56:58.322524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbedda0 (9): Bad file descriptor 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:38.858 [2024-07-22 16:56:58.332564] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:38.858 [2024-07-22 16:56:58.332822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:38.858 [2024-07-22 16:56:58.332850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbedda0 with addr=10.0.0.2, port=4420 00:43:38.858 [2024-07-22 16:56:58.332866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbedda0 is same with the state(5) to be set 00:43:38.858 [2024-07-22 16:56:58.332888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbedda0 (9): Bad file descriptor 00:43:38.858 [2024-07-22 16:56:58.332909] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:38.858 [2024-07-22 16:56:58.332923] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:38.858 [2024-07-22 16:56:58.332955] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:38.858 [2024-07-22 16:56:58.332987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
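The connect() failed, errno = 111 stanza above is the expected fallout of the nvmf_subsystem_remove_listener call traced at host/discovery.sh@127: the target has stopped listening on port 4420, so every host-side reconnect to that port is refused (errno 111 is ECONNREFUSED) until the next discovery log page reports the path gone and it is pruned. Stripped of trace noise, the step being exercised reduces to these two commands, both visible verbatim in the trace:

    # Drop the first listener; reconnect attempts to 4420 now fail with
    # errno 111 until discovery removes the stale path.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # Block until only the second path (port 4421) remains on the host.
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'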
00:43:38.858 [2024-07-22 16:56:58.342655] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:38.858 [2024-07-22 16:56:58.342833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:38.858 [2024-07-22 16:56:58.342860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbedda0 with addr=10.0.0.2, port=4420 00:43:38.858 [2024-07-22 16:56:58.342875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbedda0 is same with the state(5) to be set 00:43:38.858 [2024-07-22 16:56:58.342895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbedda0 (9): Bad file descriptor 00:43:38.858 [2024-07-22 16:56:58.342915] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:38.858 [2024-07-22 16:56:58.342928] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:38.858 [2024-07-22 16:56:58.342941] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:38.858 [2024-07-22 16:56:58.342985] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:38.858 [2024-07-22 16:56:58.352738] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:38.858 [2024-07-22 16:56:58.352977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:38.858 [2024-07-22 16:56:58.353006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbedda0 with addr=10.0.0.2, port=4420 00:43:38.858 [2024-07-22 16:56:58.353022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbedda0 is same with the state(5) to be set 00:43:38.858 [2024-07-22 16:56:58.353044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbedda0 (9): Bad file descriptor 00:43:38.858 [2024-07-22 16:56:58.353064] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:38.858 [2024-07-22 16:56:58.353078] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:38.858 [2024-07-22 16:56:58.353092] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:38.858 [2024-07-22 16:56:58.353137] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:43:38.858 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:43:38.858 [2024-07-22 16:56:58.362823] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:38.858 [2024-07-22 16:56:58.363045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:38.858 [2024-07-22 16:56:58.363078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbedda0 with addr=10.0.0.2, port=4420 00:43:38.858 [2024-07-22 16:56:58.363095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbedda0 is same with the state(5) to be set 00:43:38.858 [2024-07-22 16:56:58.363118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbedda0 (9): Bad file descriptor 00:43:38.858 [2024-07-22 16:56:58.363151] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:38.858 [2024-07-22 16:56:58.363169] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:38.858 [2024-07-22 16:56:58.363183] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:38.858 [2024-07-22 16:56:58.363202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
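The notification_count= and notify_id= assignments traced at host/discovery.sh@74-75 come from the get_notification_count helper. A sketch consistent with the values observed in this run (each call counts events newer than notify_id, then advances the cursor by that count); the exact upstream body is an assumption:

    # Count notifications issued since the last-seen id, then advance the
    # cursor so subsequent calls only see new events.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

This is why notify_id climbs from 0 to 4 over the course of the test while notification_count is recomputed on every call.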
00:43:38.858 [2024-07-22 16:56:58.372909] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:38.858 [2024-07-22 16:56:58.373119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:38.858 [2024-07-22 16:56:58.373147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbedda0 with addr=10.0.0.2, port=4420 00:43:38.858 [2024-07-22 16:56:58.373164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbedda0 is same with the state(5) to be set 00:43:38.858 [2024-07-22 16:56:58.373186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbedda0 (9): Bad file descriptor 00:43:38.858 [2024-07-22 16:56:58.373219] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:38.858 [2024-07-22 16:56:58.373237] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:38.858 [2024-07-22 16:56:58.373251] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:38.858 [2024-07-22 16:56:58.373270] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:38.858 [2024-07-22 16:56:58.383002] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:38.859 [2024-07-22 16:56:58.383209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:38.859 [2024-07-22 16:56:58.383237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbedda0 with addr=10.0.0.2, port=4420 00:43:38.859 [2024-07-22 16:56:58.383253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbedda0 is same with the state(5) to be set 00:43:38.859 [2024-07-22 16:56:58.383289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbedda0 (9): Bad file descriptor 00:43:38.859 [2024-07-22 16:56:58.383322] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:38.859 [2024-07-22 16:56:58.383339] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:38.859 [2024-07-22 16:56:58.383353] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:38.859 [2024-07-22 16:56:58.383371] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
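The host/discovery.sh@63 pipeline traced above and below is the get_subsystem_paths helper: it flattens the trsvcid (port) of every connected path for one controller onto a single sorted line, which is what makes comparisons such as 4420 4421 == 4420 4421 possible. Assembled from the traced fragments (every stage appears verbatim in the @63 lines):

    # Print the TCP ports of all connected paths for controller $1,
    # numerically sorted and joined onto one line by xargs.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

Once the 4420 listener is gone, this helper is what the @131 wait below polls until it returns just 4421.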
00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:38.859 [2024-07-22 16:56:58.393074] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:38.859 [2024-07-22 16:56:58.393277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:38.859 [2024-07-22 16:56:58.393318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbedda0 with addr=10.0.0.2, port=4420 00:43:38.859 [2024-07-22 16:56:58.393333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbedda0 is same with the state(5) to be set 00:43:38.859 [2024-07-22 16:56:58.393353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbedda0 (9): Bad file descriptor 00:43:38.859 [2024-07-22 16:56:58.393390] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:38.859 [2024-07-22 16:56:58.393407] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:38.859 [2024-07-22 16:56:58.393420] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:38.859 [2024-07-22 16:56:58.393438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:38.859 [2024-07-22 16:56:58.396391] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:43:38.859 [2024-07-22 16:56:58.396417] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:43:38.859 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 
-- # xtrace_disable 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:39.117 16:56:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:40.050 [2024-07-22 16:56:59.677131] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:43:40.050 [2024-07-22 16:56:59.677161] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:43:40.050 [2024-07-22 16:56:59.677183] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:43:40.311 [2024-07-22 16:56:59.763479] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:43:40.600 [2024-07-22 16:57:00.075532] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:43:40.600 [2024-07-22 16:57:00.075587] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:43:40.600 request: 00:43:40.600 { 00:43:40.600 "name": "nvme", 00:43:40.600 "trtype": "tcp", 00:43:40.600 "traddr": "10.0.0.2", 00:43:40.600 "hostnqn": "nqn.2021-12.io.spdk:test", 00:43:40.600 "adrfam": "ipv4", 00:43:40.600 "trsvcid": "8009", 00:43:40.600 "wait_for_attach": true, 00:43:40.600 "method": "bdev_nvme_start_discovery", 00:43:40.600 "req_id": 1 00:43:40.600 } 00:43:40.600 Got JSON-RPC error response 00:43:40.600 response: 00:43:40.600 { 00:43:40.600 "code": -17, 00:43:40.600 "message": "File exists" 00:43:40.600 } 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:43:40.600 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:40.601 request: 00:43:40.601 { 00:43:40.601 "name": "nvme_second", 00:43:40.601 "trtype": "tcp", 00:43:40.601 "traddr": "10.0.0.2", 00:43:40.601 "hostnqn": "nqn.2021-12.io.spdk:test", 00:43:40.601 "adrfam": "ipv4", 00:43:40.601 "trsvcid": "8009", 00:43:40.601 "wait_for_attach": true, 00:43:40.601 "method": "bdev_nvme_start_discovery", 00:43:40.601 "req_id": 1 00:43:40.601 } 00:43:40.601 Got JSON-RPC error response 00:43:40.601 response: 00:43:40.601 { 00:43:40.601 "code": -17, 00:43:40.601 "message": "File exists" 00:43:40.601 } 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:43:40.601 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:43:40.894 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:40.894 16:57:00 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:43:40.894 16:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:43:40.894 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:43:40.894 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:43:40.894 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:43:40.894 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:40.894 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:43:40.894 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:40.895 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:43:40.895 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:40.895 16:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:41.829 [2024-07-22 16:57:01.266984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:41.829 [2024-07-22 16:57:01.267069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc1f5a0 with addr=10.0.0.2, port=8010 00:43:41.829 [2024-07-22 16:57:01.267102] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:43:41.829 [2024-07-22 16:57:01.267117] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:41.829 [2024-07-22 16:57:01.267130] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:43:42.762 [2024-07-22 16:57:02.269381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.762 [2024-07-22 16:57:02.269418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2b2e0 with addr=10.0.0.2, port=8010 00:43:42.762 [2024-07-22 16:57:02.269440] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:43:42.762 [2024-07-22 16:57:02.269461] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:42.762 [2024-07-22 16:57:02.269475] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:43:43.693 [2024-07-22 16:57:03.271589] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:43:43.693 request: 00:43:43.693 { 00:43:43.693 "name": "nvme_second", 00:43:43.693 "trtype": "tcp", 00:43:43.693 "traddr": "10.0.0.2", 00:43:43.693 "hostnqn": "nqn.2021-12.io.spdk:test", 00:43:43.693 "adrfam": "ipv4", 00:43:43.693 "trsvcid": "8010", 00:43:43.693 "attach_timeout_ms": 3000, 00:43:43.693 "method": "bdev_nvme_start_discovery", 00:43:43.693 "req_id": 1 00:43:43.693 } 00:43:43.693 Got JSON-RPC error response 00:43:43.693 response: 00:43:43.693 { 00:43:43.693 "code": -110, 00:43:43.693 "message": "Connection timed out" 
00:43:43.693 } 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2927075 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:43.693 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:43.693 rmmod nvme_tcp 00:43:43.950 rmmod nvme_fabrics 00:43:43.950 rmmod nvme_keyring 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2926935 ']' 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2926935 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 2926935 ']' 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 2926935 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2926935 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2926935' 00:43:43.950 killing process with pid 2926935 00:43:43.950 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 2926935 00:43:43.951 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 2926935 00:43:44.209 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:44.209 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:44.209 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:44.209 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:44.209 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:44.209 16:57:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:44.209 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:44.209 16:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:46.108 16:57:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:46.108 00:43:46.108 real 0m13.794s 00:43:46.108 user 0m19.425s 00:43:46.108 sys 0m3.152s 00:43:46.108 16:57:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:46.108 16:57:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:43:46.108 ************************************ 00:43:46.108 END TEST nvmf_host_discovery 00:43:46.108 ************************************ 00:43:46.108 16:57:05 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:43:46.108 16:57:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:43:46.108 16:57:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:46.108 16:57:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:46.366 ************************************ 00:43:46.366 START TEST nvmf_host_multipath_status 00:43:46.366 ************************************ 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:43:46.366 * Looking for test storage... 
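[Note] The nvmf_host_discovery run that just ended exercises its error paths through the NOT wrapper traced above (common/autotest_common.sh): starting discovery a second time under the same controller name must fail with -17 "File exists", and attaching to a portal that refuses connections must fail with -110 "Connection timed out". A minimal sketch of that wrapper, reconstructed from the es/valid_exec_arg fragments in the trace and assuming the real helper does more argument validation than shown here:

NOT() {
    # Run the wrapped command; "success" for NOT means the command failed.
    local es=0
    "$@" || es=$?
    # An exit status above 128 means death by signal - propagate that as a
    # genuine error rather than an expected failure.
    (( es > 128 )) && return "$es"
    (( es != 0 ))
}
# As in the trace: a second bdev_nvme_start_discovery for nvme_second is
# expected to be rejected by the target.
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w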
00:43:46.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:46.366 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:46.367 16:57:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:43:46.367 16:57:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:43:48.897 Found 0000:82:00.0 (0x8086 - 0x159b) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:43:48.897 Found 0000:82:00.1 (0x8086 - 0x159b) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
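[Note] The wall of array appends above is nvmf/common.sh building its table of supported Intel (e810, x722) and Mellanox device IDs and matching it against the PCI bus; this run finds two E810 ports (0x8086:0x159b at 0000:82:00.0/.1). A rough stand-in for that lookup, assuming lspci and sysfs are available (the real script walks a prebuilt pci_bus_cache instead):

net_devs=()
# 8086:159b is the E810 ID matched in this run; -D keeps the PCI domain so
# the address lines up with the sysfs path.
while read -r pci _; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] && net_devs+=("${dev##*/}")
    done
done < <(lspci -D -d 8086:159b)
printf 'Found net device %s\n' "${net_devs[@]}"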
00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:43:48.897 Found net devices under 0000:82:00.0: cvl_0_0 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:43:48.897 Found net devices under 0000:82:00.1: cvl_0_1 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:48.897 16:57:08 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:48.897 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:48.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:48.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:43:48.898 00:43:48.898 --- 10.0.0.2 ping statistics --- 00:43:48.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:48.898 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:48.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:48.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:43:48.898 00:43:48.898 --- 10.0.0.1 ping statistics --- 00:43:48.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:48.898 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2930996 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2930996 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 2930996 ']' 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:48.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:48.898 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:43:49.157 [2024-07-22 16:57:08.558553] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
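[Note] The ping pair above confirms the topology that nvmf_tcp_init just built: the target port cvl_0_0 lives in its own network namespace with 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1, and both directions answer. Condensed from the commands in the trace (interface names and addresses are specific to this machine, wired back-to-back):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator -> target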
00:43:49.157 [2024-07-22 16:57:08.558644] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:49.157 EAL: No free 2048 kB hugepages reported on node 1 00:43:49.157 [2024-07-22 16:57:08.631440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:49.157 [2024-07-22 16:57:08.715569] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:49.157 [2024-07-22 16:57:08.715621] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:49.157 [2024-07-22 16:57:08.715649] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:49.157 [2024-07-22 16:57:08.715661] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:49.157 [2024-07-22 16:57:08.715670] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:49.157 [2024-07-22 16:57:08.715750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:49.157 [2024-07-22 16:57:08.715754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:49.415 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:49.415 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:43:49.415 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:49.415 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:49.415 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:43:49.415 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:49.415 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2930996 00:43:49.415 16:57:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:49.673 [2024-07-22 16:57:09.076922] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:49.673 16:57:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:43:49.931 Malloc0 00:43:49.931 16:57:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:43:50.189 16:57:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:50.446 16:57:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:50.704 [2024-07-22 16:57:10.127984] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:50.704 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:43:50.962 [2024-07-22 16:57:10.388745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:50.962 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2931307 00:43:50.962 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:43:50.962 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:43:50.962 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2931307 /var/tmp/bdevperf.sock 00:43:50.962 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 2931307 ']' 00:43:50.962 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:50.962 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:43:50.962 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:50.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:50.962 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:50.962 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:43:51.220 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:51.220 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:43:51.220 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:43:51.477 16:57:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:43:51.746 Nvme0n1 00:43:51.746 16:57:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:43:52.314 Nvme0n1 00:43:52.314 16:57:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:43:52.314 16:57:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:43:54.213 16:57:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:43:54.213 16:57:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:43:54.471 16:57:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:43:54.728 16:57:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:43:56.102 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:43:56.102 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:43:56.102 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:43:56.102 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:43:56.102 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:43:56.102 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:43:56.102 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:43:56.102 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:43:56.360 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:43:56.360 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:43:56.360 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:43:56.360 16:57:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:43:56.618 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:43:56.618 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:43:56.618 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:43:56.618 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:43:56.876 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:43:56.876 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:43:56.876 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:43:56.876 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:43:57.134 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:43:57.134 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:43:57.134 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:43:57.134 16:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:43:57.392 16:57:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:43:57.392 16:57:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:43:57.392 16:57:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:43:57.957 16:57:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:43:57.957 16:57:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:43:59.331 16:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:43:59.331 16:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:43:59.331 16:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:43:59.331 16:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:43:59.331 16:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:43:59.331 16:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:43:59.331 16:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:43:59.331 16:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:43:59.589 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:43:59.589 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:43:59.589 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:43:59.589 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:43:59.847 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:43:59.847 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:43:59.847 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:43:59.847 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:44:00.105 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:00.105 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:44:00.106 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:00.106 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:44:00.364 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:00.364 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:44:00.364 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:00.364 16:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:44:00.622 16:57:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:00.622 16:57:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:44:00.622 16:57:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:44:00.880 16:57:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:44:01.446 16:57:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:44:02.378 16:57:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:44:02.378 16:57:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:44:02.378 16:57:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:02.378 16:57:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:44:02.636 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:02.636 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:44:02.636 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:02.636 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:44:02.894 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:44:02.894 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:44:02.894 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:02.894 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:44:03.152 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:03.152 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:44:03.152 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:03.152 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:44:03.410 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:03.410 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:44:03.410 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:03.410 16:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:44:03.667 16:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:03.667 16:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:44:03.667 16:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:03.667 16:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:44:03.924 16:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:03.924 16:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:44:03.924 16:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:44:04.182 16:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:44:04.439 16:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:44:05.812 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:44:05.812 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:44:05.812 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:05.812 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:44:05.812 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:05.812 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:44:05.812 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:05.812 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:44:06.070 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:44:06.070 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:44:06.070 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:06.070 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:44:06.328 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:06.328 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:44:06.328 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:06.328 16:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:44:06.585 16:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:06.585 16:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:44:06.585 16:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:06.585 16:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:44:06.843 16:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
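[Note] Every check_status round above reduces to the same probe: ask bdevperf, over /var/tmp/bdevperf.sock, for its I/O paths and compare one field of the path whose listener matches the given port. A sketch of that helper, keeping the exact jq filter from the trace and assuming rpc.py is on PATH (the trace invokes it by full workspace path):

port_status() {
    local port=$1 attr=$2 expected=$3 actual
    actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}
# e.g. with ANA states non_optimized/inaccessible, port 4421 should stay
# connected but no longer be accessible for I/O:
port_status 4421 connected true && port_status 4421 accessible false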
00:44:06.843 16:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:44:06.843 16:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:06.843 16:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:44:07.101 16:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:44:07.101 16:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:44:07.101 16:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:44:07.666 16:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:44:07.666 16:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:44:09.038 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:44:09.038 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:44:09.038 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:09.038 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:44:09.038 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:44:09.038 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:44:09.038 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:09.038 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:44:09.296 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:44:09.296 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:44:09.296 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:44:09.296 16:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:44:09.553 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:44:09.553 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
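[Note] The state machine being stepped through here (optimized, non_optimized, inaccessible, and their combinations) is driven by one helper that flips the ANA state of both listeners of cnode1; paraphrased from the two rpc.py calls that recur at host/multipath_status.sh@59-60 in the trace:

set_ANA_state() {
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # $1 applies to the 4420 listener, $2 to the 4421 listener.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}
set_ANA_state inaccessible optimized   # the combination exercised next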
00:44:09.553 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:44:09.553 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:09.553 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:44:09.810 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:09.810 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:44:09.810 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:09.810 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:44:10.067 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:44:10.068 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:44:10.068 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:10.068 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:44:10.325 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:44:10.325 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:44:10.325 16:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:44:10.583 16:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:44:10.840 16:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
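
set_ANA_state (multipath_status.sh@59-@60) issues one nvmf_subsystem_listener_set_ana_state RPC per listener, first for port 4420 and then for 4421, exactly as traced; the sleep 1 that follows presumably gives the initiator time to observe the ANA change before the next check. A sketch under that reading (argument names are illustrative, not the script's own):

  set_ANA_state() {   # $1 = ANA state for listener 4420, $2 = ANA state for listener 4421
      scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
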
00:44:11.772 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:44:11.772 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:44:11.772 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:11.772 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:44:12.028 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:44:12.028 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:44:12.028 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:12.029 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:44:12.286 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:12.286 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:44:12.286 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:12.286 16:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:44:12.544 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:12.544 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:44:12.544 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:12.544 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:44:12.800 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:12.800 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:44:12.800 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:12.800 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:44:13.058 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:44:13.058 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:44:13.058 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:13.058 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:44:13.316 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:13.316 16:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
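
The bdev_nvme_set_multipath_policy call above switches bdev Nvme0n1 from the default active_passive policy to active_active, so from here on every optimized path can be 'current' at once; the optimized/optimized check that follows expects exactly that. For instance, with the same RPCs the test uses:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  # count paths currently usable for I/O; with both listeners optimized
  # the check below implies this prints 2
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq '[.poll_groups[].io_paths[] | select(.current == true)] | length'
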
00:44:13.572 16:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:44:13.572 16:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:44:13.829 16:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:44:14.086 16:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
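
check_status's six booleans are the expected current, connected and accessible values for the 4420 path followed by the same three for 4421, verified through the port_status calls at multipath_status.sh@68-@73. A sketch of that shape (illustrative, building on the port_status sketch earlier):

  check_status() {   # args: cur4420 cur4421 conn4420 conn4421 acc4420 acc4421
      port_status 4420 current "$1"    && port_status 4421 current "$2" &&
      port_status 4420 connected "$3"  && port_status 4421 connected "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }
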
00:44:15.458 16:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:44:15.458 16:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:44:15.458 16:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:15.458 16:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:44:15.458 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:15.458 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:44:15.458 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:15.458 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:44:15.715 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:15.715 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:44:15.716 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:15.716 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:44:15.973 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:15.973 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:44:15.973 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:15.973 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:44:16.538 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:16.538 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:44:16.538 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:16.538 16:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:44:16.538 16:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:16.538 16:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:44:16.538 16:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:16.538 16:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:44:17.107 16:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:17.107 16:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:44:17.107 16:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:44:17.107 16:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:44:17.413 16:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
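
With 4420 non_optimized and 4421 optimized, the active_active policy keeps I/O on the optimized path only, so the check below expects current false/true while connected and accessible stay true for both listeners. To eyeball all six flags at once (same RPC and fields as the trace; the output format is just an illustration):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'
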
00:44:18.399 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:44:18.399 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:44:18.399 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:18.399 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:44:18.657 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:44:18.657 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:44:18.657 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:18.657 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:44:18.915 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:18.915 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:44:18.915 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:18.915 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:44:19.173 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:19.173 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:44:19.173 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:19.173 16:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:44:19.739 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:19.739 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:44:19.739 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:19.739 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:44:19.739 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:19.739 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:44:19.739 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:19.739 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:44:19.997 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:19.997 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:44:19.997 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:44:20.255 16:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:44:20.820 16:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
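
non_optimized paths remain usable for I/O, so once both listeners are non_optimized the next check expects all six flags true again; of the states exercised here, only inaccessible takes a path out of service. Each transition uses the same RPC with a different -n value (illustrative loop, not from the script):

  # the three ANA states this test cycles through, per the trace above
  for state in optimized non_optimized inaccessible; do
      scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$state"
  done
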
00:44:21.753 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:44:21.753 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:44:21.753 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:21.753 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:44:22.011 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:22.011 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:44:22.011 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:22.011 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:44:22.269 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:22.269 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:44:22.269 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:22.269 16:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:44:22.527 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:22.527 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:44:22.527 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:22.527 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:44:22.785 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:22.785 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:44:22.785 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:22.785 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:44:23.043 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:23.043 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:44:23.043 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:23.043 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:44:23.301 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
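
The final transition below leaves 4420 non_optimized and 4421 inaccessible: the check that follows (true false true true true false) shows the inaccessible path still connected at the transport level but neither current nor ANA-accessible, while all I/O stays on 4420. The equivalent spot-check with the helpers sketched above (illustrative):

  set_ANA_state non_optimized inaccessible
  sleep 1
  check_status true false true true true false   # current 4420/4421, connected 4420/4421, accessible 4420/4421
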
00:44:23.301 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:44:23.301 16:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:44:23.559 16:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:44:24.125 16:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:44:25.058 16:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:44:25.058 16:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:44:25.058 16:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:25.058 16:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:44:25.316 16:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:25.316 16:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:44:25.316 16:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:44:25.316 16:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:25.574 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:44:25.574 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:44:25.574 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:25.574 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:44:25.832 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:25.832 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:44:25.832 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:25.832 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:44:26.090 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:26.090 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:44:26.090 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:26.090 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:44:26.348 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:44:26.348 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:44:26.348 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:44:26.348 16:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:44:26.606 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:44:26.606 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2931307
00:44:26.606 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 2931307 ']'
00:44:26.606 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 2931307
00:44:26.606 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:44:26.606 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:44:26.606 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2931307
00:44:26.606 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:44:26.606 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:44:26.606 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2931307'
killing process with pid 2931307
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 2931307
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 2931307
00:44:26.890 Connection closed with partial response:
00:44:26.890
00:44:26.890
00:44:26.890
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2931307
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-22 16:57:10.446339] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
[2024-07-22 16:57:10.446426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2931307 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-22 16:57:10.515989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-22 16:57:10.600212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
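
killprocess (common/autotest_common.sh) is the suite's guarded kill, and the trace above shows each of its steps: require a pid (@946), probe it with kill -0 (@950), resolve the command name on Linux (@951-@952, here reactor_2), check it is not sudo (@956), then announce, kill and wait (@964-@970). Paraphrased from those traced lines only (not the verbatim source; the sudo branch is elided):

  killprocess() {
      [ -z "$1" ] && return 1                           # @946: a pid is required
      kill -0 "$1" || return                            # @950: bail out if it is already gone
      if [ "$(uname)" = Linux ]; then                   # @951
          process_name=$(ps --no-headers -o comm= "$1") # @952: resolves to reactor_2 here
      fi
      # @956: a process named sudo would get special handling (branch elided)
      echo "killing process with pid $1"                # @964
      kill "$1"                                         # @965
      wait "$1"                                         # @970
  }

The 'Connection closed with partial response' lines are bdevperf reacting to the kill, and everything after the cat at multipath_status.sh@141 is try.txt, bdevperf's own log: the SPDK/DPDK startup banner, then one nvme_qpair.c NOTICE pair per failed I/O -- the command printed at 243:nvme_io_qpair_print_command and its completion at 474:spdk_nvme_print_completion with status ASYMMETRIC ACCESS INACCESSIBLE (03/02). Their 16:57:26-27 timestamps fall in the window where set_ANA_state made the listeners inaccessible near the start of this trace.
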
00:44:26.890 [2024-07-22 16:57:26.995708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.995773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.995825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.995844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.995868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.995885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.995907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.995924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.995962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.995992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:44:26.890 [2024-07-22 16:57:26.996825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.890 [2024-07-22 16:57:26.996841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.996863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.996879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.996901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.996917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.996939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.996979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.997004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.997022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.997045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.997061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.997084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.997101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.997667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.997692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.997719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.997738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.997761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.997785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.997812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.997830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.997853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.997869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.997892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.997909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.997947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.997973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.891 [2024-07-22 16:57:26.998224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.891 [2024-07-22 16:57:26.998263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.891 [2024-07-22 16:57:26.998732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.891 [2024-07-22 16:57:26.998770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.891 [2024-07-22 16:57:26.998811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.891 [2024-07-22 16:57:26.998850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.891 [2024-07-22 16:57:26.998887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.891 [2024-07-22 16:57:26.998925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:44:26.891 [2024-07-22 16:57:26.998947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.892 [2024-07-22 16:57:26.998971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.892 [2024-07-22 16:57:26.999030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.892 [2024-07-22 16:57:26.999069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.892 [2024-07-22 16:57:26.999108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.892 [2024-07-22 16:57:26.999147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.892 [2024-07-22 16:57:26.999185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.892 [2024-07-22 16:57:26.999224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.892 [2024-07-22 16:57:26.999264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.892 [2024-07-22 16:57:26.999320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.892 [2024-07-22 16:57:26.999363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.892 [2024-07-22 16:57:26.999401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:44:26.892 [2024-07-22 16:57:26.999950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.892 [2024-07-22 16:57:26.999971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:26.892 [2024-07-22 16:57:27.000527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.892 [2024-07-22 16:57:27.000543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.000564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.000580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.000601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.000617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.000639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.000654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.000676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.000692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.001515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.001565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.001614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.001652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.001690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.001727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.001765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.001802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.001839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.001876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.001913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:44:26.893 [2024-07-22 16:57:27.001976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.001996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.893 [2024-07-22 16:57:27.002845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:44:26.893 [2024-07-22 16:57:27.002866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.002882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.002903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.002919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.002955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.002980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:44:26.894 [2024-07-22 16:57:27.003141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.003554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.003570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.004210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.004233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.004260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.004293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.004316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.004331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.004352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.004369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.004390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.004405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.004426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.004441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.004462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.004478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.004499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.004514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:44:26.894 [2024-07-22 16:57:27.004535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.894 [2024-07-22 16:57:27.004551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.004572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.004587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.004624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.004645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.004683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.004703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.004725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.004741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.004763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.004778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.004800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.004816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.004838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.004854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.004876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.004892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.004914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.004930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:44:26.895 [2024-07-22 16:57:27.004951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.004975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.004999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.005016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.005054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.005091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.005130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.005174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.005212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.005251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.005288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.895 [2024-07-22 16:57:27.005916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.005954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.005985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.006004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.006027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.006043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.006066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.895 [2024-07-22 16:57:27.006082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:44:26.895 [2024-07-22 16:57:27.006103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:44:26.895 [2024-07-22 16:57:27.006123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.006957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.006987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.007005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.007027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.007044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.007066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.007082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.007109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.007126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.007148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.007164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.007187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.007204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.008078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.008102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:26.896 [2024-07-22 16:57:27.008129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.896 [2024-07-22 16:57:27.008148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:44:26.896 [2024-07-22 16:57:27.008171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.896 [2024-07-22 16:57:27.008187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:44:26.896 [2024-07-22 16:57:27.008209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.896 [2024-07-22 16:57:27.008226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:44:26.896 [2024-07-22 16:57:27.008249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.896 [2024-07-22 16:57:27.008265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs, [2024-07-22 16:57:27.008288] through [2024-07-22 16:57:27.018272]: WRITE commands (nsid:1, lba 53536-54408, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (nsid:1, lba 53392-53528, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:44:26.902 [2024-07-22 16:57:27.018294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.902 [2024-07-22 16:57:27.018325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:44:26.902 [2024-07-22 16:57:27.018347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.902 [2024-07-22 16:57:27.018363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.902 [2024-07-22 16:57:27.018907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.018959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.018992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:44:26.902 [2024-07-22 16:57:27.019460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:44:26.902 [2024-07-22 16:57:27.019495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.902 [2024-07-22 16:57:27.019510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.019958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.019993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.020011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.020033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.020049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.020071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.020088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.020899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.020925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.020976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.020997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:44:26.903 [2024-07-22 16:57:27.021464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:44:26.903 [2024-07-22 16:57:27.021591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.903 [2024-07-22 16:57:27.021606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.021626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.021641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.021662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.021677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.021697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.021712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.021733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.021747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.021768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.021783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.021803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.021818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.021838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.021853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.021878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.021894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.021914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.021929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.021974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.021993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:44:26.904 [2024-07-22 16:57:27.022598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.022889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.022905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.023536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.023558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.023582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.023599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.023620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.023635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.023656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.023672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:44:26.904 [2024-07-22 16:57:27.023693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.904 [2024-07-22 16:57:27.023709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.023729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.023744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.023765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.023780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.023801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.023816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.023836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.023851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.023872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.023893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.023915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.023931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.023978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.023998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.905 [2024-07-22 16:57:27.024191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.905 [2024-07-22 16:57:27.024230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:44:26.905 [2024-07-22 16:57:27.024377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.905 [2024-07-22 16:57:27.024707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:44:26.905 [2024-07-22 16:57:27.024727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53408 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.905 [2024-07-22 16:57:27.024748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
[2024-07-22 16:57:27.024770 .. 16:57:27.034551, elapsed 00:44:26.905 .. 00:44:26.910: repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs condensed. READ sqid:1 nsid:1 lba:53392..53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:53536..54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; the same LBA ranges recur with different cid values. Every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0078 through 003a and wrapping at 007f.]
00:44:26.910 [2024-07-22 16:57:27.034571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.911 [2024-07-22 16:57:27.034586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:44:26.911 [2024-07-22 16:57:27.034606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.034621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.034642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.034657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.034677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.034692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.034717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.034733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.034753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.034769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.034789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.034804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.034825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.034840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.034860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.034876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.034896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.034911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.034931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.034962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.034994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035379] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.035615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.035630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.036202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.036231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.036274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.036291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.036328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 
16:57:27.036344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.036365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.036380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.036400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.036415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.036436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.036451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.036472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.036488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.036508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.036524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.036544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.036558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.036579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.036593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:44:26.911 [2024-07-22 16:57:27.036614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.911 [2024-07-22 16:57:27.036630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.036650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.036666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.036686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53840 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.036701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.036726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.036742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.036763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.036779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.036799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.036815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.036835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.036850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.036870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.036885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.036906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.036920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.036955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.036982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:75 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.912 [2024-07-22 16:57:27.037467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:44:26.912 [2024-07-22 16:57:27.037853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.037938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:44:26.912 [2024-07-22 16:57:27.037984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.912 [2024-07-22 16:57:27.038001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.913 [2024-07-22 16:57:27.038055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.913 [2024-07-22 16:57:27.038094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.038975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.038993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.039030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:44:26.913 [2024-07-22 16:57:27.039047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.039070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.039086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.039108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.039125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.039152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.039169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.039982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.040006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.040033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.040051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.040074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.040090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.040112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.040129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:26.913 [2024-07-22 16:57:27.040150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.913 [2024-07-22 16:57:27.040166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.040930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.040959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:44:26.914 
[2024-07-22 16:57:27.040993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:44:26.914 [2024-07-22 16:57:27.041666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.914 [2024-07-22 16:57:27.041682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.041703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.041719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.041740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.041756] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.041777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.041792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.041813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.041828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.041852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.041868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.041890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.041905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.041926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.041958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.041991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.042009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.042573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.042594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.042619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.042636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.042657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.042674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.042694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 
16:57:27.042709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.042731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.042746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.042766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.042782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.042803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.042818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.042839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.042854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.042875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.042895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.042917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.042933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.042977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.042995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53840 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.915 [2024-07-22 16:57:27.043384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.915 [2024-07-22 16:57:27.043423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043515] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:44:26.915 [2024-07-22 16:57:27.043766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.915 [2024-07-22 16:57:27.043781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.043801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.043816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.043836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.043851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.043875] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.043891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.043911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.043927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.043972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.043991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.916 [2024-07-22 16:57:27.044527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.044945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.044960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.045007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.045024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.045045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.045061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.045082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.045097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.045119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.045135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.045156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.045171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.045192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.045208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.045230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.045245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.045266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.916 [2024-07-22 16:57:27.045297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:44:26.916 [2024-07-22 16:57:27.045324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.045344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.045365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.045381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.045402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.045417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.045439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:44:26.917 [2024-07-22 16:57:27.045454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.045475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.045490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.045511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.045527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.046938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.046985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.047003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.047041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.047057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.047084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.047101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.047123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.047140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.047161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.047177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.047198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.047215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.047237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.047254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.047292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.047308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.047344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.047360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:44:26.917 [2024-07-22 16:57:27.047381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.917 [2024-07-22 16:57:27.047396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:44:26.918 [2024-07-22 16:57:27.047416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.047950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.047992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.048033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.048075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.048112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.048149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048164] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.048186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.048223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.048260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.048312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.048348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.048895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.048940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.048982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.049023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.049064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 
16:57:27.049102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.049139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.049175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.049212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.049262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.049299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.049334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.049370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.049405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:44:26.918 [2024-07-22 16:57:27.049426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.918 [2024-07-22 16:57:27.049440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53840 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.919 [2024-07-22 16:57:27.049694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.919 [2024-07-22 16:57:27.049729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:90 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.049942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.049992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.050010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.050031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.050047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.050068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.050084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.050105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.050121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.050142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.050158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.050179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.919 [2024-07-22 16:57:27.050194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:44:26.919 [2024-07-22 16:57:27.050216] nvme_qpair.c: 
00:44:26.919 [2024-07-22 16:57:27.050231] nvme_qpair.c: [repeated NOTICE entries condensed] pairs of 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion repeat for every outstanding I/O on qid:1 — READ sqid:1 nsid:1 len:8 commands (lba range 53392-53528, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE sqid:1 nsid:1 len:8 commands (lba range 53536-54408, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 
00:44:26.924 [2024-07-22 16:57:43.476348] nvme_qpair.c: [repeated NOTICE entries condensed] the same 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pattern resumes for WRITE sqid:1 nsid:1 len:8 commands (lba range 20040-20472, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 
(03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:44:26.924 [2024-07-22 16:57:43.477584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.924 [2024-07-22 16:57:43.477601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:44:26.924 [2024-07-22 16:57:43.477622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.924 [2024-07-22 16:57:43.477638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:44:26.924 [2024-07-22 16:57:43.477659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.924 [2024-07-22 16:57:43.477675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:44:26.924 [2024-07-22 16:57:43.477697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.924 [2024-07-22 16:57:43.477713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:44:26.924 [2024-07-22 16:57:43.477734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.924 [2024-07-22 16:57:43.477751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:44:26.924 [2024-07-22 16:57:43.477772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.924 [2024-07-22 16:57:43.477792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:44:26.924 [2024-07-22 16:57:43.477815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.924 [2024-07-22 16:57:43.477831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:44:26.924 [2024-07-22 16:57:43.477853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.924 [2024-07-22 16:57:43.477869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.477890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.477906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.477929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.477945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.477991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.478022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.478046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.478062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.478084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.478101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.478123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.478139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.478161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.478177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.478199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.478215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.479565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.479614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.479654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:44:26.925 [2024-07-22 16:57:43.479691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.479728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.479765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.479802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.479840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.479876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.479913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.479949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.479993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.480012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.480036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.480052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.480074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.480090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.480117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.480134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.480156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.480172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.480194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.925 [2024-07-22 16:57:43.480210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.480240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.480256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.480293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.480309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.480331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.480347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.480850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.480875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.480902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.480919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.480941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.480958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.481008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.481027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.481049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.481065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.481088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.481104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.481131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.481149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.481171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.481187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.481209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.481225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.481247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.481263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:44:26.925 [2024-07-22 16:57:43.481301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.925 [2024-07-22 16:57:43.481317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.926 [2024-07-22 16:57:43.481353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.926 [2024-07-22 16:57:43.481390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:44:26.926 [2024-07-22 16:57:43.481411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.926 [2024-07-22 16:57:43.481427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.926 [2024-07-22 16:57:43.481464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.481947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.481992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.482012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.482036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.926 [2024-07-22 16:57:43.482053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.482075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.482095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.482118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.926 [2024-07-22 16:57:43.482137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.482159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.482175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.482197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.482213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.482235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.482251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.482289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.482306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.482328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.926 [2024-07-22 16:57:43.482344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.482853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.926 [2024-07-22 16:57:43.482875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.482901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.926 [2024-07-22 16:57:43.482918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.482941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.926 [2024-07-22 16:57:43.482957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.483006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.926 [2024-07-22 16:57:43.483024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.483046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.926 [2024-07-22 16:57:43.483063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.483085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:44:26.926 [2024-07-22 16:57:43.483101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:44:26.926 [2024-07-22 16:57:43.483128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.483145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.483182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.483221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.483259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.483319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.483356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.483392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.483429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.483466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 
nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.483502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.483539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.483575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.483617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.483653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.483690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.483727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.483763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.483799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.483821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.483837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.484663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.484687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.484714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.484731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.484766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.484786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.484810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.484826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.484848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.484864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.484885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.484906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.484928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.484944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.484972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.485012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.485036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.485052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.485075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.485091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:44:26.927 [2024-07-22 16:57:43.485113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.485129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.485151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.485167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.485189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.485205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.485227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.485243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.485265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.485296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.485318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.485333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.485354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.485370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.485391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.485414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.485437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.927 [2024-07-22 16:57:43.485453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.486706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.927 [2024-07-22 16:57:43.486730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:44:26.927 [2024-07-22 16:57:43.486772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.928 [2024-07-22 16:57:43.486789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:44:26.928 [2024-07-22 16:57:43.486811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.928 [2024-07-22 16:57:43.486826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:44:26.928 [2024-07-22 16:57:43.486848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.928 [2024-07-22 16:57:43.486864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:44:26.928 [2024-07-22 16:57:43.486886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.928 [2024-07-22 16:57:43.486901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:44:26.928 [2024-07-22 16:57:43.486922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.928 [2024-07-22 16:57:43.486938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:44:26.928 [2024-07-22 16:57:43.486984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.928 [2024-07-22 16:57:43.487003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:44:26.928 [2024-07-22 16:57:43.487025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.928 [2024-07-22 16:57:43.487041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:44:26.928 [2024-07-22 16:57:43.487063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.928 [2024-07-22 16:57:43.487080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:44:26.928 [2024-07-22 16:57:43.487101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.928 [2024-07-22 16:57:43.487117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:44:26.928 [2024-07-22 16:57:43.487138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.928 [2024-07-22 16:57:43.487154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:44:26.928 [2024-07-22 16:57:43.487181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.928 [2024-07-22 16:57:43.487198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:44:26.928 [2024-07-22 16:57:43.487220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.928 [2024-07-22 16:57:43.487236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:44:26.928 [... roughly 200 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs in the same pattern (READ/WRITE, sqid:1, nsid:1, lba 19920-21736, len:8; every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 006c through 0035) ...]
00:44:26.933 [2024-07-22 16:57:43.505243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.933 [2024-07-22 16:57:43.505273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:44:26.933 [2024-07-22 16:57:43.507321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:26.933 [2024-07-22 16:57:43.507344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:44:26.933 [2024-07-22 16:57:43.507386] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.933 [2024-07-22 16:57:43.507410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:44:26.933 [2024-07-22 16:57:43.507432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.933 [2024-07-22 16:57:43.507447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:44:26.933 [2024-07-22 16:57:43.507472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.933 [2024-07-22 16:57:43.507489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:44:26.933 [2024-07-22 16:57:43.507509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.933 [2024-07-22 16:57:43.507524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:44:26.933 [2024-07-22 16:57:43.507544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.933 [2024-07-22 16:57:43.507559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:44:26.933 [2024-07-22 16:57:43.507580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.933 [2024-07-22 16:57:43.507595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:44:26.933 [2024-07-22 16:57:43.507615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.933 [2024-07-22 16:57:43.507630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:44:26.933 [2024-07-22 16:57:43.507650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.933 [2024-07-22 16:57:43.507666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:44:26.933 [2024-07-22 16:57:43.507686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.933 [2024-07-22 16:57:43.507701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:44:26.933 [2024-07-22 16:57:43.507722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.933 [2024-07-22 16:57:43.507737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:44:26.933 [2024-07-22 16:57:43.507757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.933 [2024-07-22 16:57:43.507772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.507792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.507807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.507828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.507843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.507863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.507878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.507903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.507919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.507940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.507986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.508028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.508066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.508104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.508141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.508179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.508217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.508255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.508306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.508342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.508378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.508419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.508930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.508975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.509023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.509062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.509100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.509138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.509177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.509214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.509270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.509308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.509360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.509396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.509436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:44:26.934 [2024-07-22 16:57:43.509473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.509509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.509530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.509546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.510632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.510656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.510682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.510699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.510720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.934 [2024-07-22 16:57:43.510736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.510756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.510771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.510791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.510806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.510827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.510842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.510862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.934 [2024-07-22 16:57:43.510877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:44:26.934 [2024-07-22 16:57:43.510897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.510912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.510933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.510978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.511179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.511217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.511359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.511466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.511506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.511542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.511578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.511649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.511685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:44:26.935 [2024-07-22 16:57:43.511705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.511812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.511827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.514061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.514087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.514115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.514133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.514156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.935 [2024-07-22 16:57:43.514177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.514201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.514217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.514239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.514273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:44:26.935 [2024-07-22 16:57:43.514296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.935 [2024-07-22 16:57:43.514312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.514541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.514760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.514831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.514866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.514902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.514938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.514983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.515002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.515041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:44:26.936 [2024-07-22 16:57:43.515079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.515118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.515163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.515201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.515239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.515291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.515327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.515363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.515399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.515442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.515463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.515479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.517568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.517592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.517618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.517652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.517675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.517692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.517719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.517737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.517762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.517778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.517800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.517816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.517837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.517853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.517874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.936 [2024-07-22 16:57:43.517890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:26.936 [2024-07-22 16:57:43.517911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.936 [2024-07-22 16:57:43.517926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:26.937 [2024-07-22 16:57:43.517962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.937 [2024-07-22 16:57:43.517992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:26.937 [2024-07-22 16:57:43.518015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.937 [2024-07-22 16:57:43.518032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:26.937 [2024-07-22 16:57:43.518055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.937 [2024-07-22 16:57:43.518071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:44:26.937 [2024-07-22 16:57:43.518093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.937 [2024-07-22 16:57:43.518109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:44:26.937 [2024-07-22 16:57:43.518131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.937 [2024-07-22 16:57:43.518147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:44:26.937 [2024-07-22 16:57:43.518169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.937 [2024-07-22 16:57:43.518185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:44:26.937 [2024-07-22 16:57:43.518212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.937 [2024-07-22 16:57:43.518229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:44:26.937 [2024-07-22 16:57:43.518252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:26.937 [2024-07-22 16:57:43.518268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:44:26.937 [2024-07-22 16:57:43.518290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.937 [2024-07-22 16:57:43.518307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:44:26.937 [2024-07-22 16:57:43.518330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:26.937 [2024-07-22 16:57:43.518346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:44:26.937 [2024-07-22 16:57:43.518368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:26.937 [2024-07-22 16:57:43.518385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:44:26.937-00:44:26.940 [2024-07-22 16:57:43.518406 - 16:57:43.529618] ~120 further command/completion pairs in the same form condensed (nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ and WRITE commands on sqid:1, nsid:1, len:8, lba ≈ 21016-22960, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0; only the cid, lba, and sqhd fields (advancing 0039-007f, then wrapping 0000-002f) vary between entries.
00:44:26.940 Received shutdown signal, test time was about 34.207125 seconds
00:44:26.940
00:44:26.940                                                                                      Latency(us)
00:44:26.940 Device Information                   : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:44:26.940 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:44:26.940 Verification LBA range: start 0x0 length 0x4000
00:44:26.940      Nvme0n1                         :      34.21    8451.58      33.01       0.00     0.00   15119.38     179.01 4101097.24
00:44:26.940 ===================================================================================================================
00:44:26.940 Total                                :             8451.58      33.01       0.00     0.00   15119.38     179.01 4101097.24
00:44:26.940 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:44:27.198 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2930996 ']'
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2930996
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 2930996 ']'
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 2930996
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:44:27.198 16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2930996
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2930996'
killing process with pid 2930996
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 2930996
16:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 2930996
00:44:27.457 16:57:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
16:57:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
16:57:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
16:57:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
16:57:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
16:57:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
16:57:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
16:57:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:44:29.987 16:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:44:29.987
00:44:29.987 real    0m43.286s
00:44:29.987 user    2m9.534s
00:44:29.987 sys     0m12.114s
16:57:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
16:57:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:44:29.987 ************************************
00:44:29.987 END TEST nvmf_host_multipath_status
00:44:29.987 ************************************
16:57:49 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
16:57:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
16:57:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
16:57:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:44:29.987 ************************************
00:44:29.987 START TEST nvmf_discovery_remove_ifc
00:44:29.987 ************************************
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
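As far as the trace shows, the run_test wrapper used here is a timed banner harness: it prints the START banner, runs the named test script under time, and closes with the real/user/sys figures and the END banner (compare the END TEST nvmf_host_multipath_status block above). A simplified sketch reconstructed from that behavior (assumed; the real helper in autotest_common.sh does more, e.g. the argument-count check '[' 3 -le 1 ']' traced above):

    run_test() {
        # Simplified reconstruction from the trace, not the harness source.
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                      # run the test script with its arguments
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
    # e.g.: run_test nvmf_discovery_remove_ifc ./discovery_remove_ifc.sh --transport=tcp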
00:44:29.987 * Looking for test storage...
00:44:29.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2-@4 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:$PATH (three successive prepends; each trace line repeats the full, increasingly duplicated ~700-character PATH value, condensed here)
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo $PATH (same duplicated value as above, condensed)
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- #
host_sock=/tmp/host.sock 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:44:29.988 16:57:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:44:31.888 Found 0000:82:00.0 (0x8086 - 0x159b) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:44:31.888 Found 0000:82:00.1 (0x8086 - 0x159b) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:31.888 16:57:51 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:44:31.888 Found net devices under 0000:82:00.0: cvl_0_0 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:44:31.888 Found net devices under 0000:82:00.1: cvl_0_1 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:44:31.888 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:32.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:32.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:44:32.147 00:44:32.147 --- 10.0.0.2 ping statistics --- 00:44:32.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:32.147 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:32.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:32.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:44:32.147 00:44:32.147 --- 10.0.0.1 ping statistics --- 00:44:32.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:32.147 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2938035 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2938035 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 2938035 ']' 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:32.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:32.147 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:32.147 [2024-07-22 16:57:51.711716] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:44:32.147 [2024-07-22 16:57:51.711801] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:32.147 EAL: No free 2048 kB hugepages reported on node 1 00:44:32.147 [2024-07-22 16:57:51.784294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:32.405 [2024-07-22 16:57:51.868681] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:32.405 [2024-07-22 16:57:51.868732] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:32.405 [2024-07-22 16:57:51.868762] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:32.405 [2024-07-22 16:57:51.868774] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:32.405 [2024-07-22 16:57:51.868785] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:32.405 [2024-07-22 16:57:51.868827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:32.405 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:32.405 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:44:32.405 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:32.405 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:32.405 16:57:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:32.405 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:32.405 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:44:32.405 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:32.405 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:32.405 [2024-07-22 16:57:52.016195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:32.405 [2024-07-22 16:57:52.024409] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:44:32.405 null0 00:44:32.664 [2024-07-22 16:57:52.056359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:32.664 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:32.664 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2938062 00:44:32.664 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:44:32.664 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2938062 /tmp/host.sock 00:44:32.664 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 2938062 ']' 00:44:32.664 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:44:32.664 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:32.664 
16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:44:32.664 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:44:32.664 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:32.664 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:32.664 [2024-07-22 16:57:52.118905] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:44:32.664 [2024-07-22 16:57:52.118992] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938062 ] 00:44:32.664 EAL: No free 2048 kB hugepages reported on node 1 00:44:32.664 [2024-07-22 16:57:52.188743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:32.664 [2024-07-22 16:57:52.279741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:32.923 16:57:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:33.855 [2024-07-22 16:57:53.453845] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:44:33.855 [2024-07-22 16:57:53.453883] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:44:33.855 [2024-07-22 16:57:53.453908] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:44:34.112 [2024-07-22 16:57:53.581347] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:44:34.112 [2024-07-22 16:57:53.644847] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:44:34.112 [2024-07-22 16:57:53.644925] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:44:34.112 [2024-07-22 16:57:53.644981] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:44:34.112 [2024-07-22 16:57:53.645022] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:44:34.112 [2024-07-22 16:57:53.645056] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:34.112 [2024-07-22 16:57:53.652274] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x67d900 was disconnected and freed. delete nvme_qpair. 
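The wait_for_bdev/get_bdev_list exchange traced above is the probe this test repeats once per second for the rest of the run, so it is worth spelling out once. A minimal sketch reconstructed from the traced commands: the jq/sort/xargs pipeline and the one-second sleep are verbatim from the trace, while the unbounded loop and the single positional argument are assumptions about how the real helpers in discovery_remove_ifc.sh are written (rpc_cmd is the harness's RPC wrapper; treating it as scripts/rpc.py underneath is an assumption).

    # Sketch: poll the host's bdev list until it matches an expected value.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1    # e.g. "nvme0n1", or "" once removal is expected
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }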
00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:34.112 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:34.370 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:44:34.370 16:57:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:44:35.303 16:57:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:35.303 16:57:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:35.303 16:57:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:35.303 16:57:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:35.303 16:57:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:44:35.303 16:57:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:35.303 16:57:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:35.303 16:57:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:35.303 16:57:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:44:35.303 16:57:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:44:36.235 16:57:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:36.235 16:57:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:36.235 16:57:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:36.235 16:57:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:36.235 16:57:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:36.235 16:57:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:44:36.235 16:57:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:36.236 16:57:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:36.236 16:57:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:44:36.236 16:57:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:44:37.606 16:57:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:37.606 16:57:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:37.606 16:57:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:37.606 16:57:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:37.606 16:57:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:37.606 16:57:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:44:37.606 16:57:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:37.606 16:57:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:37.606 16:57:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:44:37.606 16:57:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:44:38.539 16:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:38.539 16:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:38.539 16:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:38.539 16:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:38.539 16:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:38.539 16:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:44:38.539 16:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:38.539 16:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:38.539 16:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:44:38.539 16:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:44:39.472 16:57:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:39.472 16:57:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:39.472 16:57:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:39.472 16:57:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:39.472 16:57:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:39.472 16:57:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:44:39.472 16:57:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:39.472 16:57:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:44:39.472 16:57:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:44:39.472 16:57:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:44:39.472 [2024-07-22 16:57:59.085852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:44:39.472 [2024-07-22 16:57:59.085923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:44:39.472 [2024-07-22 16:57:59.085948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:39.472 [2024-07-22 16:57:59.085982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:44:39.472 [2024-07-22 16:57:59.086013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:39.472 [2024-07-22 16:57:59.086033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:44:39.472 [2024-07-22 16:57:59.086046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:39.472 [2024-07-22 16:57:59.086059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:44:39.472 [2024-07-22 16:57:59.086072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:39.472 [2024-07-22 16:57:59.086085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:44:39.472 [2024-07-22 16:57:59.086097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:39.472 [2024-07-22 16:57:59.086109] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644990 is same with the state(5) to be set 00:44:39.472 [2024-07-22 16:57:59.095872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x644990 (9): Bad file descriptor 00:44:39.472 [2024-07-22 16:57:59.105919] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:44:40.406 16:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:40.406 16:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:40.406 16:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:40.406 16:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:40.406 16:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:40.406 16:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:44:40.406 16:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:40.663 [2024-07-22 16:58:00.153001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:44:40.663 [2024-07-22 
16:58:00.153075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x644990 with addr=10.0.0.2, port=4420 00:44:40.663 [2024-07-22 16:58:00.153104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644990 is same with the state(5) to be set 00:44:40.663 [2024-07-22 16:58:00.153153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x644990 (9): Bad file descriptor 00:44:40.663 [2024-07-22 16:58:00.153621] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:44:40.663 [2024-07-22 16:58:00.153658] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:44:40.663 [2024-07-22 16:58:00.153676] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:44:40.663 [2024-07-22 16:58:00.153695] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:44:40.663 [2024-07-22 16:58:00.153736] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:40.663 [2024-07-22 16:58:00.153757] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:44:40.664 16:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:40.664 16:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:44:40.664 16:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:44:41.603 [2024-07-22 16:58:01.156250] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:44:41.603 [2024-07-22 16:58:01.156297] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:44:41.603 [2024-07-22 16:58:01.156314] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:44:41.603 [2024-07-22 16:58:01.156329] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:44:41.603 [2024-07-22 16:58:01.156351] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
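The reset/reconnect failures above play out on the schedule the host requested when discovery was started (the bdev_nvme_start_discovery call traced earlier): retry the connection every second, declare I/O fast-fail after one second, and delete the controller, and with it nvme0n1, once it has been unreachable for two seconds. Restated as a standalone RPC invocation; the flags are verbatim from the trace, while the scripts/rpc.py path is an assumption about what the rpc_cmd wrapper resolves to.

    # Host-side discovery with aggressive failure detection (a sketch):
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach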
00:44:41.603 [2024-07-22 16:58:01.156389] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:44:41.603 [2024-07-22 16:58:01.156432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:44:41.603 [2024-07-22 16:58:01.156456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:41.603 [2024-07-22 16:58:01.156489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:44:41.603 [2024-07-22 16:58:01.156505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:41.603 [2024-07-22 16:58:01.156522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:44:41.603 [2024-07-22 16:58:01.156538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:41.603 [2024-07-22 16:58:01.156556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:44:41.603 [2024-07-22 16:58:01.156580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:41.603 [2024-07-22 16:58:01.156596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:44:41.603 [2024-07-22 16:58:01.156613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:41.603 [2024-07-22 16:58:01.156631] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
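Decoding the block of completions dumped above (the same five-entry pattern appeared when the data controller died): tearing down a qpair aborts every admin command still outstanding on it, which here is four queued Asynchronous Event Requests plus the keep-alive.

    # Annotation of the trace above, not executable code:
    #   qid 0, cid 0..3: ASYNC EVENT REQUEST (opcode 0x0c) -> ABORTED - SQ DELETION
    #   qid 0, cid 4:    KEEP ALIVE          (opcode 0x18) -> ABORTED - SQ DELETION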
00:44:41.603 [2024-07-22 16:58:01.156796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x643de0 (9): Bad file descriptor 00:44:41.603 [2024-07-22 16:58:01.157820] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:44:41.603 [2024-07-22 16:58:01.157847] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:44:41.603 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:41.603 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:41.603 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:41.603 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:41.603 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:41.603 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:44:41.603 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:41.603 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:41.603 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:44:41.603 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:41.603 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:41.861 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:44:41.861 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:41.861 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:41.861 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:41.861 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:41.861 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:41.861 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:44:41.861 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:41.861 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:41.861 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:44:41.861 16:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:44:42.793 16:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:42.793 16:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:42.793 16:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:42.793 16:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:42.793 16:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:44:42.793 16:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:42.793 16:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:42.793 16:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:42.793 16:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:44:42.793 16:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:44:43.725 [2024-07-22 16:58:03.169479] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:44:43.725 [2024-07-22 16:58:03.169508] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:44:43.725 [2024-07-22 16:58:03.169534] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:44:43.725 [2024-07-22 16:58:03.300961] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:44:43.725 16:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:43.725 16:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:43.725 16:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:43.725 16:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:43.726 16:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:44:43.726 16:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:43.726 16:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:43.726 16:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:43.984 16:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:44:43.984 16:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:44:43.984 [2024-07-22 16:58:03.520671] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:44:43.984 [2024-07-22 16:58:03.520729] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:44:43.984 [2024-07-22 16:58:03.520766] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:44:43.984 [2024-07-22 16:58:03.520792] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:44:43.984 [2024-07-22 16:58:03.520806] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:44:43.984 [2024-07-22 16:58:03.569737] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x65ef60 was disconnected and freed. delete nvme_qpair. 
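Recovery is the mirror image of the fault: once the address and link are restored inside the namespace (the two commands traced at discovery_remove_ifc.sh@82 and @83), the discovery service sees the subsystem again and attaches it under a fresh controller name, nvme1, which is why the test now waits for nvme1n1 rather than nvme0n1. As standalone commands, with the namespace, interface, and address taken from this run:

    # Restore target-side connectivity; discovery re-attaches as nvme1.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up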
00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2938062 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 2938062 ']' 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 2938062 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2938062 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2938062' 00:44:44.917 killing process with pid 2938062 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 2938062 00:44:44.917 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 2938062 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:44:45.175 rmmod nvme_tcp 00:44:45.175 rmmod nvme_fabrics 00:44:45.175 rmmod nvme_keyring 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
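The rmmod lines above come out of nvmfcleanup's retry loop: unloading nvme-tcp can fail transiently while connections drain, so the harness drops set -e and retries up to twenty times. A sketch under that reading; the {1..20} bound, the sync, and both modprobe -r calls are in the trace, while the &&-chaining and the back-off sleep are assumptions about the loop body.

    # Unload the kernel NVMe/TCP initiator stack, tolerating transient EBUSY.
    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1    # assumption: brief back-off before retrying
        done
        set -e
    }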
00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2938035 ']' 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2938035 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 2938035 ']' 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 2938035 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2938035 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2938035' 00:44:45.175 killing process with pid 2938035 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 2938035 00:44:45.175 16:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 2938035 00:44:45.432 16:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:44:45.432 16:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:44:45.432 16:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:44:45.432 16:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:45.432 16:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:44:45.432 16:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:45.432 16:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:45.432 16:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:47.964 16:58:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:44:47.964 00:44:47.964 real 0m17.958s 00:44:47.964 user 0m25.489s 00:44:47.964 sys 0m3.338s 00:44:47.964 16:58:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:44:47.964 16:58:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:44:47.964 ************************************ 00:44:47.964 END TEST nvmf_discovery_remove_ifc 00:44:47.964 ************************************ 00:44:47.964 16:58:07 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:44:47.964 16:58:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:44:47.964 16:58:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:44:47.964 16:58:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:47.964 ************************************ 00:44:47.964 START TEST nvmf_identify_kernel_target 00:44:47.964 ************************************ 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:44:47.964 * Looking for test storage... 00:44:47.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
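One detail worth pulling out of the common.sh block just sourced: the host identity is minted per run. nvme gen-hostnqn produces a UUID-based NQN, and the host ID reuses that UUID (8b464f06-2980-e311-ba20-001e67a94acd above), so target-side host filtering stays consistent within a run without hardcoding identities. A sketch of the derivation; the parameter expansion used to strip the prefix is an assumption about how common.sh gets from one value to the other.

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # assumption: host ID = the UUID suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")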
00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:47.964 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:47.965 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
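Everything nvmftestinit does from here, the PCI enumeration below and then nvmf_tcp_init, converges on the same two-endpoint topology the previous test used: the first CVL port moves into a private network namespace as the target (10.0.0.2) and the second stays in the root namespace as the initiator (10.0.0.1). Condensed from the traced commands, with interface names and addresses from this run; the harness also flushes any stale addresses first.

    # Target endpoint lives in its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Initiator endpoint stays in the root namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity check: each side must reach the other before the test proceeds.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1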
00:44:47.965 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:47.965 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:47.965 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:47.965 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:44:47.965 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:47.965 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:44:47.965 16:58:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:50.495 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:50.496 16:58:09 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:44:50.496 Found 0000:82:00.0 (0x8086 - 0x159b) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:44:50.496 Found 0000:82:00.1 (0x8086 - 0x159b) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:50.496 
16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:44:50.496 Found net devices under 0000:82:00.0: cvl_0_0 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:44:50.496 Found net devices under 0000:82:00.1: cvl_0_1 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:50.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:50.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:44:50.496 00:44:50.496 --- 10.0.0.2 ping statistics --- 00:44:50.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:50.496 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:50.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:50.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:44:50.496 00:44:50.496 --- 10.0.0.1 ping statistics --- 00:44:50.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:50.496 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:50.496 
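What the nvmf_tcp_init calls above build is a two-namespace topology over the two physical E810 ports: cvl_0_0 becomes the target interface inside a dedicated namespace, cvl_0_1 stays in the root namespace as the initiator, and traffic crosses the physical link between the two ports (so the ports must be cabled together). Condensed from the trace:

  ns=cvl_0_0_ns_spdk
  ip netns add "$ns"
  ip link set cvl_0_0 netns "$ns"               # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                            # root ns -> namespace, over the wire
  ip netns exec "$ns" ping -c 1 10.0.0.1        # and back

The sub-millisecond ping times in the trace confirm the loop is physical and up before the target is configured.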
16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:50.496 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:44:50.497 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:50.497 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:50.497 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:50.497 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:44:50.497 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:44:50.497 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:44:50.497 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:50.497 16:58:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:51.432 Waiting for block devices as requested 00:44:51.432 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:44:51.690 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:51.690 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:51.690 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:51.948 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:51.948 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:51.948 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:51.948 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:52.207 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:52.207 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:52.207 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:52.207 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:52.466 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:52.466 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:52.466 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:52.724 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:52.724 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:52.724 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:44:52.724 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:52.724 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:44:52.724 16:58:12 
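The lines that follow vet /dev/nvme0n1 (not zoned, no GPT, not in use) and then build the kernel NVMe-oF target purely through configfs. Condensed below; note that xtrace does not record redirection targets, so the attribute file names are the standard nvmet ones and are assumed rather than read from the log:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo "SPDK-$nqn" > "$sub/attr_model"          # surfaces later as Model Number
  echo 1 > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp  > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"              # publish the subsystem on the port

The identify dump further down confirms the effect: Model Number SPDK-nqn.2016-06.io.spdk:testnqn and a 3907029168-LBA namespace backed by the vetted disk.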
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:44:52.724 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:52.724 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:44:52.724 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:44:52.724 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:44:52.724 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:52.724 No valid GPT data, bailing 00:44:52.724 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:44:52.983 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:44:52.984 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:44:52.984 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:44:52.984 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:52.984 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:44:52.984 00:44:52.984 Discovery Log Number of Records 2, Generation counter 2 00:44:52.984 =====Discovery Log Entry 0====== 00:44:52.984 trtype: tcp 00:44:52.984 adrfam: ipv4 00:44:52.984 subtype: current discovery subsystem 00:44:52.984 treq: not specified, sq flow control disable supported 00:44:52.984 portid: 1 00:44:52.984 trsvcid: 4420 00:44:52.984 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:52.984 traddr: 10.0.0.1 00:44:52.984 eflags: none 00:44:52.984 sectype: none 00:44:52.984 =====Discovery Log Entry 1====== 
00:44:52.984 trtype: tcp 00:44:52.984 adrfam: ipv4 00:44:52.984 subtype: nvme subsystem 00:44:52.984 treq: not specified, sq flow control disable supported 00:44:52.984 portid: 1 00:44:52.984 trsvcid: 4420 00:44:52.984 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:52.984 traddr: 10.0.0.1 00:44:52.984 eflags: none 00:44:52.984 sectype: none 00:44:52.984 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:44:52.984 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:44:52.984 EAL: No free 2048 kB hugepages reported on node 1 00:44:52.984 ===================================================== 00:44:52.984 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:44:52.984 ===================================================== 00:44:52.984 Controller Capabilities/Features 00:44:52.984 ================================ 00:44:52.984 Vendor ID: 0000 00:44:52.984 Subsystem Vendor ID: 0000 00:44:52.984 Serial Number: 22816b89ada271915e60 00:44:52.984 Model Number: Linux 00:44:52.984 Firmware Version: 6.7.0-68 00:44:52.984 Recommended Arb Burst: 0 00:44:52.984 IEEE OUI Identifier: 00 00 00 00:44:52.984 Multi-path I/O 00:44:52.984 May have multiple subsystem ports: No 00:44:52.984 May have multiple controllers: No 00:44:52.984 Associated with SR-IOV VF: No 00:44:52.984 Max Data Transfer Size: Unlimited 00:44:52.984 Max Number of Namespaces: 0 00:44:52.984 Max Number of I/O Queues: 1024 00:44:52.984 NVMe Specification Version (VS): 1.3 00:44:52.984 NVMe Specification Version (Identify): 1.3 00:44:52.984 Maximum Queue Entries: 1024 00:44:52.984 Contiguous Queues Required: No 00:44:52.984 Arbitration Mechanisms Supported 00:44:52.984 Weighted Round Robin: Not Supported 00:44:52.984 Vendor Specific: Not Supported 00:44:52.984 Reset Timeout: 7500 ms 00:44:52.984 Doorbell Stride: 4 bytes 00:44:52.984 NVM Subsystem Reset: Not Supported 00:44:52.984 Command Sets Supported 00:44:52.984 NVM Command Set: Supported 00:44:52.984 Boot Partition: Not Supported 00:44:52.984 Memory Page Size Minimum: 4096 bytes 00:44:52.984 Memory Page Size Maximum: 4096 bytes 00:44:52.984 Persistent Memory Region: Not Supported 00:44:52.984 Optional Asynchronous Events Supported 00:44:52.984 Namespace Attribute Notices: Not Supported 00:44:52.984 Firmware Activation Notices: Not Supported 00:44:52.984 ANA Change Notices: Not Supported 00:44:52.984 PLE Aggregate Log Change Notices: Not Supported 00:44:52.984 LBA Status Info Alert Notices: Not Supported 00:44:52.984 EGE Aggregate Log Change Notices: Not Supported 00:44:52.984 Normal NVM Subsystem Shutdown event: Not Supported 00:44:52.984 Zone Descriptor Change Notices: Not Supported 00:44:52.984 Discovery Log Change Notices: Supported 00:44:52.984 Controller Attributes 00:44:52.984 128-bit Host Identifier: Not Supported 00:44:52.984 Non-Operational Permissive Mode: Not Supported 00:44:52.984 NVM Sets: Not Supported 00:44:52.984 Read Recovery Levels: Not Supported 00:44:52.984 Endurance Groups: Not Supported 00:44:52.984 Predictable Latency Mode: Not Supported 00:44:52.984 Traffic Based Keep ALive: Not Supported 00:44:52.984 Namespace Granularity: Not Supported 00:44:52.984 SQ Associations: Not Supported 00:44:52.984 UUID List: Not Supported 00:44:52.984 Multi-Domain Subsystem: Not Supported 00:44:52.984 Fixed Capacity Management: Not Supported 00:44:52.984 Variable Capacity Management: Not 
Supported 00:44:52.984 Delete Endurance Group: Not Supported 00:44:52.984 Delete NVM Set: Not Supported 00:44:52.984 Extended LBA Formats Supported: Not Supported 00:44:52.984 Flexible Data Placement Supported: Not Supported 00:44:52.984 00:44:52.984 Controller Memory Buffer Support 00:44:52.984 ================================ 00:44:52.984 Supported: No 00:44:52.984 00:44:52.984 Persistent Memory Region Support 00:44:52.984 ================================ 00:44:52.984 Supported: No 00:44:52.984 00:44:52.984 Admin Command Set Attributes 00:44:52.984 ============================ 00:44:52.984 Security Send/Receive: Not Supported 00:44:52.984 Format NVM: Not Supported 00:44:52.984 Firmware Activate/Download: Not Supported 00:44:52.984 Namespace Management: Not Supported 00:44:52.984 Device Self-Test: Not Supported 00:44:52.984 Directives: Not Supported 00:44:52.984 NVMe-MI: Not Supported 00:44:52.984 Virtualization Management: Not Supported 00:44:52.984 Doorbell Buffer Config: Not Supported 00:44:52.984 Get LBA Status Capability: Not Supported 00:44:52.984 Command & Feature Lockdown Capability: Not Supported 00:44:52.984 Abort Command Limit: 1 00:44:52.984 Async Event Request Limit: 1 00:44:52.984 Number of Firmware Slots: N/A 00:44:52.984 Firmware Slot 1 Read-Only: N/A 00:44:52.984 Firmware Activation Without Reset: N/A 00:44:52.984 Multiple Update Detection Support: N/A 00:44:52.984 Firmware Update Granularity: No Information Provided 00:44:52.984 Per-Namespace SMART Log: No 00:44:52.984 Asymmetric Namespace Access Log Page: Not Supported 00:44:52.984 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:44:52.984 Command Effects Log Page: Not Supported 00:44:52.984 Get Log Page Extended Data: Supported 00:44:52.984 Telemetry Log Pages: Not Supported 00:44:52.984 Persistent Event Log Pages: Not Supported 00:44:52.984 Supported Log Pages Log Page: May Support 00:44:52.984 Commands Supported & Effects Log Page: Not Supported 00:44:52.984 Feature Identifiers & Effects Log Page:May Support 00:44:52.984 NVMe-MI Commands & Effects Log Page: May Support 00:44:52.984 Data Area 4 for Telemetry Log: Not Supported 00:44:52.984 Error Log Page Entries Supported: 1 00:44:52.984 Keep Alive: Not Supported 00:44:52.984 00:44:52.984 NVM Command Set Attributes 00:44:52.984 ========================== 00:44:52.984 Submission Queue Entry Size 00:44:52.984 Max: 1 00:44:52.984 Min: 1 00:44:52.984 Completion Queue Entry Size 00:44:52.984 Max: 1 00:44:52.984 Min: 1 00:44:52.984 Number of Namespaces: 0 00:44:52.984 Compare Command: Not Supported 00:44:52.984 Write Uncorrectable Command: Not Supported 00:44:52.984 Dataset Management Command: Not Supported 00:44:52.984 Write Zeroes Command: Not Supported 00:44:52.984 Set Features Save Field: Not Supported 00:44:52.984 Reservations: Not Supported 00:44:52.984 Timestamp: Not Supported 00:44:52.984 Copy: Not Supported 00:44:52.984 Volatile Write Cache: Not Present 00:44:52.984 Atomic Write Unit (Normal): 1 00:44:52.984 Atomic Write Unit (PFail): 1 00:44:52.984 Atomic Compare & Write Unit: 1 00:44:52.984 Fused Compare & Write: Not Supported 00:44:52.984 Scatter-Gather List 00:44:52.984 SGL Command Set: Supported 00:44:52.984 SGL Keyed: Not Supported 00:44:52.984 SGL Bit Bucket Descriptor: Not Supported 00:44:52.984 SGL Metadata Pointer: Not Supported 00:44:52.984 Oversized SGL: Not Supported 00:44:52.984 SGL Metadata Address: Not Supported 00:44:52.984 SGL Offset: Supported 00:44:52.984 Transport SGL Data Block: Not Supported 00:44:52.984 Replay Protected Memory Block: 
Not Supported 00:44:52.984 00:44:52.984 Firmware Slot Information 00:44:52.984 ========================= 00:44:52.984 Active slot: 0 00:44:52.984 00:44:52.984 00:44:52.984 Error Log 00:44:52.984 ========= 00:44:52.984 00:44:52.984 Active Namespaces 00:44:52.984 ================= 00:44:52.984 Discovery Log Page 00:44:52.984 ================== 00:44:52.984 Generation Counter: 2 00:44:52.984 Number of Records: 2 00:44:52.984 Record Format: 0 00:44:52.984 00:44:52.984 Discovery Log Entry 0 00:44:52.984 ---------------------- 00:44:52.984 Transport Type: 3 (TCP) 00:44:52.984 Address Family: 1 (IPv4) 00:44:52.984 Subsystem Type: 3 (Current Discovery Subsystem) 00:44:52.984 Entry Flags: 00:44:52.984 Duplicate Returned Information: 0 00:44:52.984 Explicit Persistent Connection Support for Discovery: 0 00:44:52.984 Transport Requirements: 00:44:52.985 Secure Channel: Not Specified 00:44:52.985 Port ID: 1 (0x0001) 00:44:52.985 Controller ID: 65535 (0xffff) 00:44:52.985 Admin Max SQ Size: 32 00:44:52.985 Transport Service Identifier: 4420 00:44:52.985 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:44:52.985 Transport Address: 10.0.0.1 00:44:52.985 Discovery Log Entry 1 00:44:52.985 ---------------------- 00:44:52.985 Transport Type: 3 (TCP) 00:44:52.985 Address Family: 1 (IPv4) 00:44:52.985 Subsystem Type: 2 (NVM Subsystem) 00:44:52.985 Entry Flags: 00:44:52.985 Duplicate Returned Information: 0 00:44:52.985 Explicit Persistent Connection Support for Discovery: 0 00:44:52.985 Transport Requirements: 00:44:52.985 Secure Channel: Not Specified 00:44:52.985 Port ID: 1 (0x0001) 00:44:52.985 Controller ID: 65535 (0xffff) 00:44:52.985 Admin Max SQ Size: 32 00:44:52.985 Transport Service Identifier: 4420 00:44:52.985 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:44:52.985 Transport Address: 10.0.0.1 00:44:52.985 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:52.985 EAL: No free 2048 kB hugepages reported on node 1 00:44:53.269 get_feature(0x01) failed 00:44:53.269 get_feature(0x02) failed 00:44:53.269 get_feature(0x04) failed 00:44:53.269 ===================================================== 00:44:53.269 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:53.269 ===================================================== 00:44:53.269 Controller Capabilities/Features 00:44:53.269 ================================ 00:44:53.269 Vendor ID: 0000 00:44:53.269 Subsystem Vendor ID: 0000 00:44:53.269 Serial Number: 734c326065d2882512ab 00:44:53.269 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:44:53.269 Firmware Version: 6.7.0-68 00:44:53.269 Recommended Arb Burst: 6 00:44:53.269 IEEE OUI Identifier: 00 00 00 00:44:53.269 Multi-path I/O 00:44:53.269 May have multiple subsystem ports: Yes 00:44:53.269 May have multiple controllers: Yes 00:44:53.270 Associated with SR-IOV VF: No 00:44:53.270 Max Data Transfer Size: Unlimited 00:44:53.270 Max Number of Namespaces: 1024 00:44:53.270 Max Number of I/O Queues: 128 00:44:53.270 NVMe Specification Version (VS): 1.3 00:44:53.270 NVMe Specification Version (Identify): 1.3 00:44:53.270 Maximum Queue Entries: 1024 00:44:53.270 Contiguous Queues Required: No 00:44:53.270 Arbitration Mechanisms Supported 00:44:53.270 Weighted Round Robin: Not Supported 00:44:53.270 Vendor Specific: Not Supported 
00:44:53.270 Reset Timeout: 7500 ms 00:44:53.270 Doorbell Stride: 4 bytes 00:44:53.270 NVM Subsystem Reset: Not Supported 00:44:53.270 Command Sets Supported 00:44:53.270 NVM Command Set: Supported 00:44:53.270 Boot Partition: Not Supported 00:44:53.270 Memory Page Size Minimum: 4096 bytes 00:44:53.270 Memory Page Size Maximum: 4096 bytes 00:44:53.270 Persistent Memory Region: Not Supported 00:44:53.270 Optional Asynchronous Events Supported 00:44:53.270 Namespace Attribute Notices: Supported 00:44:53.270 Firmware Activation Notices: Not Supported 00:44:53.270 ANA Change Notices: Supported 00:44:53.270 PLE Aggregate Log Change Notices: Not Supported 00:44:53.270 LBA Status Info Alert Notices: Not Supported 00:44:53.270 EGE Aggregate Log Change Notices: Not Supported 00:44:53.270 Normal NVM Subsystem Shutdown event: Not Supported 00:44:53.270 Zone Descriptor Change Notices: Not Supported 00:44:53.270 Discovery Log Change Notices: Not Supported 00:44:53.270 Controller Attributes 00:44:53.270 128-bit Host Identifier: Supported 00:44:53.270 Non-Operational Permissive Mode: Not Supported 00:44:53.270 NVM Sets: Not Supported 00:44:53.270 Read Recovery Levels: Not Supported 00:44:53.270 Endurance Groups: Not Supported 00:44:53.270 Predictable Latency Mode: Not Supported 00:44:53.270 Traffic Based Keep ALive: Supported 00:44:53.270 Namespace Granularity: Not Supported 00:44:53.270 SQ Associations: Not Supported 00:44:53.270 UUID List: Not Supported 00:44:53.270 Multi-Domain Subsystem: Not Supported 00:44:53.270 Fixed Capacity Management: Not Supported 00:44:53.270 Variable Capacity Management: Not Supported 00:44:53.270 Delete Endurance Group: Not Supported 00:44:53.270 Delete NVM Set: Not Supported 00:44:53.270 Extended LBA Formats Supported: Not Supported 00:44:53.270 Flexible Data Placement Supported: Not Supported 00:44:53.270 00:44:53.270 Controller Memory Buffer Support 00:44:53.270 ================================ 00:44:53.270 Supported: No 00:44:53.270 00:44:53.270 Persistent Memory Region Support 00:44:53.270 ================================ 00:44:53.270 Supported: No 00:44:53.270 00:44:53.270 Admin Command Set Attributes 00:44:53.270 ============================ 00:44:53.270 Security Send/Receive: Not Supported 00:44:53.270 Format NVM: Not Supported 00:44:53.270 Firmware Activate/Download: Not Supported 00:44:53.270 Namespace Management: Not Supported 00:44:53.270 Device Self-Test: Not Supported 00:44:53.270 Directives: Not Supported 00:44:53.270 NVMe-MI: Not Supported 00:44:53.270 Virtualization Management: Not Supported 00:44:53.270 Doorbell Buffer Config: Not Supported 00:44:53.270 Get LBA Status Capability: Not Supported 00:44:53.270 Command & Feature Lockdown Capability: Not Supported 00:44:53.270 Abort Command Limit: 4 00:44:53.270 Async Event Request Limit: 4 00:44:53.270 Number of Firmware Slots: N/A 00:44:53.270 Firmware Slot 1 Read-Only: N/A 00:44:53.270 Firmware Activation Without Reset: N/A 00:44:53.270 Multiple Update Detection Support: N/A 00:44:53.270 Firmware Update Granularity: No Information Provided 00:44:53.270 Per-Namespace SMART Log: Yes 00:44:53.270 Asymmetric Namespace Access Log Page: Supported 00:44:53.270 ANA Transition Time : 10 sec 00:44:53.270 00:44:53.270 Asymmetric Namespace Access Capabilities 00:44:53.270 ANA Optimized State : Supported 00:44:53.270 ANA Non-Optimized State : Supported 00:44:53.270 ANA Inaccessible State : Supported 00:44:53.270 ANA Persistent Loss State : Supported 00:44:53.270 ANA Change State : Supported 00:44:53.270 ANAGRPID is not 
changed : No 00:44:53.270 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:44:53.270 00:44:53.270 ANA Group Identifier Maximum : 128 00:44:53.270 Number of ANA Group Identifiers : 128 00:44:53.270 Max Number of Allowed Namespaces : 1024 00:44:53.270 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:44:53.270 Command Effects Log Page: Supported 00:44:53.270 Get Log Page Extended Data: Supported 00:44:53.270 Telemetry Log Pages: Not Supported 00:44:53.270 Persistent Event Log Pages: Not Supported 00:44:53.270 Supported Log Pages Log Page: May Support 00:44:53.270 Commands Supported & Effects Log Page: Not Supported 00:44:53.270 Feature Identifiers & Effects Log Page:May Support 00:44:53.270 NVMe-MI Commands & Effects Log Page: May Support 00:44:53.270 Data Area 4 for Telemetry Log: Not Supported 00:44:53.270 Error Log Page Entries Supported: 128 00:44:53.270 Keep Alive: Supported 00:44:53.270 Keep Alive Granularity: 1000 ms 00:44:53.270 00:44:53.270 NVM Command Set Attributes 00:44:53.270 ========================== 00:44:53.270 Submission Queue Entry Size 00:44:53.270 Max: 64 00:44:53.270 Min: 64 00:44:53.270 Completion Queue Entry Size 00:44:53.270 Max: 16 00:44:53.270 Min: 16 00:44:53.270 Number of Namespaces: 1024 00:44:53.270 Compare Command: Not Supported 00:44:53.270 Write Uncorrectable Command: Not Supported 00:44:53.270 Dataset Management Command: Supported 00:44:53.270 Write Zeroes Command: Supported 00:44:53.270 Set Features Save Field: Not Supported 00:44:53.270 Reservations: Not Supported 00:44:53.270 Timestamp: Not Supported 00:44:53.270 Copy: Not Supported 00:44:53.270 Volatile Write Cache: Present 00:44:53.270 Atomic Write Unit (Normal): 1 00:44:53.270 Atomic Write Unit (PFail): 1 00:44:53.270 Atomic Compare & Write Unit: 1 00:44:53.270 Fused Compare & Write: Not Supported 00:44:53.270 Scatter-Gather List 00:44:53.270 SGL Command Set: Supported 00:44:53.270 SGL Keyed: Not Supported 00:44:53.270 SGL Bit Bucket Descriptor: Not Supported 00:44:53.270 SGL Metadata Pointer: Not Supported 00:44:53.270 Oversized SGL: Not Supported 00:44:53.270 SGL Metadata Address: Not Supported 00:44:53.270 SGL Offset: Supported 00:44:53.270 Transport SGL Data Block: Not Supported 00:44:53.270 Replay Protected Memory Block: Not Supported 00:44:53.270 00:44:53.270 Firmware Slot Information 00:44:53.270 ========================= 00:44:53.270 Active slot: 0 00:44:53.270 00:44:53.270 Asymmetric Namespace Access 00:44:53.270 =========================== 00:44:53.270 Change Count : 0 00:44:53.270 Number of ANA Group Descriptors : 1 00:44:53.270 ANA Group Descriptor : 0 00:44:53.270 ANA Group ID : 1 00:44:53.270 Number of NSID Values : 1 00:44:53.270 Change Count : 0 00:44:53.270 ANA State : 1 00:44:53.270 Namespace Identifier : 1 00:44:53.270 00:44:53.270 Commands Supported and Effects 00:44:53.270 ============================== 00:44:53.270 Admin Commands 00:44:53.270 -------------- 00:44:53.270 Get Log Page (02h): Supported 00:44:53.270 Identify (06h): Supported 00:44:53.270 Abort (08h): Supported 00:44:53.270 Set Features (09h): Supported 00:44:53.270 Get Features (0Ah): Supported 00:44:53.270 Asynchronous Event Request (0Ch): Supported 00:44:53.270 Keep Alive (18h): Supported 00:44:53.270 I/O Commands 00:44:53.270 ------------ 00:44:53.270 Flush (00h): Supported 00:44:53.270 Write (01h): Supported LBA-Change 00:44:53.270 Read (02h): Supported 00:44:53.270 Write Zeroes (08h): Supported LBA-Change 00:44:53.270 Dataset Management (09h): Supported 00:44:53.270 00:44:53.270 Error Log 00:44:53.270 ========= 
00:44:53.270 Entry: 0 00:44:53.270 Error Count: 0x3 00:44:53.270 Submission Queue Id: 0x0 00:44:53.270 Command Id: 0x5 00:44:53.270 Phase Bit: 0 00:44:53.270 Status Code: 0x2 00:44:53.270 Status Code Type: 0x0 00:44:53.270 Do Not Retry: 1 00:44:53.270 Error Location: 0x28 00:44:53.270 LBA: 0x0 00:44:53.270 Namespace: 0x0 00:44:53.270 Vendor Log Page: 0x0 00:44:53.270 ----------- 00:44:53.270 Entry: 1 00:44:53.270 Error Count: 0x2 00:44:53.270 Submission Queue Id: 0x0 00:44:53.270 Command Id: 0x5 00:44:53.270 Phase Bit: 0 00:44:53.270 Status Code: 0x2 00:44:53.270 Status Code Type: 0x0 00:44:53.270 Do Not Retry: 1 00:44:53.270 Error Location: 0x28 00:44:53.270 LBA: 0x0 00:44:53.270 Namespace: 0x0 00:44:53.270 Vendor Log Page: 0x0 00:44:53.270 ----------- 00:44:53.270 Entry: 2 00:44:53.270 Error Count: 0x1 00:44:53.271 Submission Queue Id: 0x0 00:44:53.271 Command Id: 0x4 00:44:53.271 Phase Bit: 0 00:44:53.271 Status Code: 0x2 00:44:53.271 Status Code Type: 0x0 00:44:53.271 Do Not Retry: 1 00:44:53.271 Error Location: 0x28 00:44:53.271 LBA: 0x0 00:44:53.271 Namespace: 0x0 00:44:53.271 Vendor Log Page: 0x0 00:44:53.271 00:44:53.271 Number of Queues 00:44:53.271 ================ 00:44:53.271 Number of I/O Submission Queues: 128 00:44:53.271 Number of I/O Completion Queues: 128 00:44:53.271 00:44:53.271 ZNS Specific Controller Data 00:44:53.271 ============================ 00:44:53.271 Zone Append Size Limit: 0 00:44:53.271 00:44:53.271 00:44:53.271 Active Namespaces 00:44:53.271 ================= 00:44:53.271 get_feature(0x05) failed 00:44:53.271 Namespace ID:1 00:44:53.271 Command Set Identifier: NVM (00h) 00:44:53.271 Deallocate: Supported 00:44:53.271 Deallocated/Unwritten Error: Not Supported 00:44:53.271 Deallocated Read Value: Unknown 00:44:53.271 Deallocate in Write Zeroes: Not Supported 00:44:53.271 Deallocated Guard Field: 0xFFFF 00:44:53.271 Flush: Supported 00:44:53.271 Reservation: Not Supported 00:44:53.271 Namespace Sharing Capabilities: Multiple Controllers 00:44:53.271 Size (in LBAs): 3907029168 (1863GiB) 00:44:53.271 Capacity (in LBAs): 3907029168 (1863GiB) 00:44:53.271 Utilization (in LBAs): 3907029168 (1863GiB) 00:44:53.271 UUID: b68d3c41-d1ae-405c-9799-e485e6c39b0b 00:44:53.271 Thin Provisioning: Not Supported 00:44:53.271 Per-NS Atomic Units: Yes 00:44:53.271 Atomic Boundary Size (Normal): 0 00:44:53.271 Atomic Boundary Size (PFail): 0 00:44:53.271 Atomic Boundary Offset: 0 00:44:53.271 NGUID/EUI64 Never Reused: No 00:44:53.271 ANA group ID: 1 00:44:53.271 Namespace Write Protected: No 00:44:53.271 Number of LBA Formats: 1 00:44:53.271 Current LBA Format: LBA Format #00 00:44:53.271 LBA Format #00: Data Size: 512 Metadata Size: 0 00:44:53.271 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:44:53.271 rmmod nvme_tcp 00:44:53.271 rmmod nvme_fabrics 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- 
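The three error-log entries above line up with the get_feature(0x01/0x02/0x04) calls that failed at the start of this identify run (Arbitration, Power Management, Temperature Threshold); the kernel target rejects those optional features with generic status 0x2, Invalid Field in Command. From here the test tears everything down in reverse: nvmftestfini syncs and unloads the initiator modules (the rmmod lines above), removes the cvl_0_0_ns_spdk namespace, and clean_kernel_target dismantles the configfs tree child-first. A condensed sketch, with the bare echo-0 assumed to land on the namespace enable attribute:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn

  modprobe -v -r nvme-tcp nvme-fabrics          # initiator side first
  echo 0 > "$sub/namespaces/1/enable"           # quiesce the namespace
  rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$nqn"
  rmdir "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$sub"
  modprobe -r nvmet_tcp nvmet                   # only once no holders remain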
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:53.271 16:58:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:55.193 16:58:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:44:55.193 16:58:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:44:55.193 16:58:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:55.193 16:58:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:44:55.193 16:58:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:55.193 16:58:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:55.194 16:58:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:55.194 16:58:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:55.194 16:58:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:44:55.194 16:58:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:44:55.194 16:58:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:56.568 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:56.568 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:56.568 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:56.568 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:56.568 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:56.568 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:56.568 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:56.568 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:56.568 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:56.568 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:56.568 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:56.568 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:56.568 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:56.568 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci 00:44:56.568 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:56.568 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:58.470 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:44:58.729 00:44:58.729 real 0m11.031s 00:44:58.729 user 0m2.216s 00:44:58.729 sys 0m3.851s 00:44:58.729 16:58:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:44:58.729 16:58:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:44:58.729 ************************************ 00:44:58.729 END TEST nvmf_identify_kernel_target 00:44:58.729 ************************************ 00:44:58.729 16:58:18 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:44:58.729 16:58:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:44:58.729 16:58:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:44:58.729 16:58:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:58.729 ************************************ 00:44:58.729 START TEST nvmf_auth_host 00:44:58.729 ************************************ 00:44:58.729 16:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:44:58.729 * Looking for test storage... 00:44:58.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:44:58.729 16:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:58.729 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
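Worth noting in the nvmf_auth_host setup above: the host identity comes from nvme gen-hostnqn, and the UUID portion is reused verbatim as the host ID. The same UUID (8b464f06-2980-e311-ba20-001e67a94acd) already appeared in the earlier discover call of the kernel-target test, which suggests gen-hostnqn is deriving it from stable machine identity (typically /etc/nvme/hostnqn or the DMI product UUID) rather than generating a fresh random one per invocation; that derivation is an inference, not something the trace states. A sketch of how the pair is used:

  hostnqn=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:8b464f06-...
  hostid=${hostnqn##*uuid:}       # strip the NQN prefix, keep the UUID
  nvme discover --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -a 10.0.0.1 -s 4420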
00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:44:58.730 16:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:45:01.260 Found 0000:82:00.0 (0x8086 - 0x159b) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:45:01.260 Found 0000:82:00.1 (0x8086 - 0x159b) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:45:01.260 Found net devices under 0000:82:00.0: cvl_0_0 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:45:01.260 Found net devices under 0000:82:00.1: cvl_0_1 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:45:01.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:01.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:45:01.260 00:45:01.260 --- 10.0.0.2 ping statistics --- 00:45:01.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:01.260 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:01.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:01.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:45:01.260 00:45:01.260 --- 10.0.0.1 ping statistics --- 00:45:01.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:01.260 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2946039 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:45:01.260 16:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
waitforlisten 2946039 00:45:01.261 16:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 2946039 ']' 00:45:01.261 16:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:01.261 16:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:01.261 16:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:01.261 16:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:01.261 16:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aa2640d6330178617cba14bbf6d46055 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CTH 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aa2640d6330178617cba14bbf6d46055 0 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aa2640d6330178617cba14bbf6d46055 0 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aa2640d6330178617cba14bbf6d46055 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:45:01.519 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CTH 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CTH 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.CTH 
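Every gen_dhchap_key call traced above follows one recipe: read len/2 random bytes as a hex string, wrap that string in the NVMe DH-HMAC-CHAP secret representation, and drop the result into a 0600 temp file. A minimal standalone sketch of that recipe follows; the helper name and the exact python payload are assumptions (the trace only shows "python -"), with the payload reconstructed from the DHHC-1:<digest>:<base64>: strings that appear later in the log:

gen_dhchap_key_sketch() {
	local digest=$1 len=$2 key file
	# len hex characters == len/2 bytes of entropy, e.g. "null 32" -> 16 bytes
	key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
	file=$(mktemp -t "spdk.key-$digest.XXX")
	# DHHC-1:<digest id>:<base64(ascii secret + crc32 LE)>: -- digest ids match
	# the map in the trace (null=0, sha256=1, sha384=2, sha512=3)
	python3 - "$key" "$digest" > "$file" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}[sys.argv[2]]
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
	chmod 0600 "$file"
	echo "$file"
}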
00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=50e8df3fc93c23d861e761f5e89c420b62469e8ec674631f7da8c2062de44025 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.p0U 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 50e8df3fc93c23d861e761f5e89c420b62469e8ec674631f7da8c2062de44025 3 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 50e8df3fc93c23d861e761f5e89c420b62469e8ec674631f7da8c2062de44025 3 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=50e8df3fc93c23d861e761f5e89c420b62469e8ec674631f7da8c2062de44025 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.p0U 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.p0U 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.p0U 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3b2cc9a48e89a9dc74d5f40547d5d1c1fe70f3283775fa40 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.evJ 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3b2cc9a48e89a9dc74d5f40547d5d1c1fe70f3283775fa40 0 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3b2cc9a48e89a9dc74d5f40547d5d1c1fe70f3283775fa40 0 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3b2cc9a48e89a9dc74d5f40547d5d1c1fe70f3283775fa40 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.evJ 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.evJ 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.evJ 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7c4530f4c0e4093b8e938c07131281c0fd2548d25a15a885 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:45:01.788 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.o6w 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7c4530f4c0e4093b8e938c07131281c0fd2548d25a15a885 2 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7c4530f4c0e4093b8e938c07131281c0fd2548d25a15a885 2 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7c4530f4c0e4093b8e938c07131281c0fd2548d25a15a885 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.o6w 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.o6w 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.o6w 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=85719ede7706f85adbffa761aa6e3392 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.y3d 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 85719ede7706f85adbffa761aa6e3392 1 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 85719ede7706f85adbffa761aa6e3392 1 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=85719ede7706f85adbffa761aa6e3392 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.y3d 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.y3d 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.y3d 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5d969c921bcbea0e5b787a88e55d6eac 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.i7C 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5d969c921bcbea0e5b787a88e55d6eac 1 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5d969c921bcbea0e5b787a88e55d6eac 1 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5d969c921bcbea0e5b787a88e55d6eac 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.i7C 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.i7C 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.i7C 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:45:01.789 16:58:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:45:01.789 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9234224d9fa7d51edff5b51c55e39b09f01f021ea36e56db 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6Oi 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9234224d9fa7d51edff5b51c55e39b09f01f021ea36e56db 2 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9234224d9fa7d51edff5b51c55e39b09f01f021ea36e56db 2 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9234224d9fa7d51edff5b51c55e39b09f01f021ea36e56db 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6Oi 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6Oi 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.6Oi 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c4fdf57426e02e3fcc1369f2e0236a3a 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cTu 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c4fdf57426e02e3fcc1369f2e0236a3a 0 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c4fdf57426e02e3fcc1369f2e0236a3a 0 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c4fdf57426e02e3fcc1369f2e0236a3a 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:45:02.047 16:58:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cTu 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cTu 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.cTu 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a3291b7fe70021ba6d2e080d9598343eb9dc8c652a4982912c9683b11222c12f 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:45:02.047 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NDI 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a3291b7fe70021ba6d2e080d9598343eb9dc8c652a4982912c9683b11222c12f 3 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a3291b7fe70021ba6d2e080d9598343eb9dc8c652a4982912c9683b11222c12f 3 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a3291b7fe70021ba6d2e080d9598343eb9dc8c652a4982912c9683b11222c12f 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NDI 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NDI 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.NDI 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2946039 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 2946039 ']' 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:02.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
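With the target (pid 2946039) up and listening on /var/tmp/spdk.sock, the loop that follows registers every generated secret file with the target's keyring: key<i> for the host secret and, where a controller secret exists, ckey<i>. Stripped of the rpc_cmd wrapper, this corresponds to plain rpc.py calls along these lines (the first two file names are taken from the trace; the remaining iterations follow the same pattern):

scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0 /tmp/spdk.key-null.CTH
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.p0U
# ... likewise key1/ckey1 through key4; ckey4 is empty, so its add is skipped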
00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:02.048 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CTH 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.p0U ]] 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.p0U 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.evJ 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.o6w ]] 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.o6w 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.y3d 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.i7C ]] 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.i7C 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.6Oi 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:02.306 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.cTu ]] 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.cTu 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.NDI 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:45:02.307 16:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:03.680 Waiting for block devices as requested 00:45:03.680 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:45:03.680 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:03.938 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:03.938 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:04.196 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:04.196 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:04.196 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:04.196 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:04.197 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:04.454 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:04.454 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:04.454 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:04.711 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:04.711 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:04.711 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:04.711 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:04.969 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:45:05.227 No valid GPT data, bailing 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:45:05.227 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:45:05.228 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:45:05.228 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:45:05.486 00:45:05.486 Discovery Log Number of Records 2, Generation counter 2 00:45:05.486 =====Discovery Log Entry 0====== 00:45:05.486 trtype: tcp 00:45:05.486 adrfam: ipv4 00:45:05.486 subtype: current discovery subsystem 00:45:05.486 treq: not specified, sq flow control disable supported 00:45:05.486 portid: 1 00:45:05.486 trsvcid: 4420 00:45:05.486 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:05.486 traddr: 10.0.0.1 00:45:05.486 eflags: none 00:45:05.486 sectype: none 00:45:05.486 =====Discovery Log Entry 1====== 00:45:05.486 trtype: tcp 00:45:05.486 adrfam: ipv4 00:45:05.486 subtype: nvme subsystem 00:45:05.486 treq: not specified, sq flow control disable supported 00:45:05.486 portid: 1 00:45:05.486 trsvcid: 4420 00:45:05.486 subnqn: nqn.2024-02.io.spdk:cnode0 00:45:05.486 traddr: 10.0.0.1 00:45:05.486 eflags: none 00:45:05.486 sectype: none 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 
]] 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:05.486 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:05.487 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:05.487 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:05.487 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:05.487 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:05.487 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:05.487 16:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:05.487 16:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:05.487 16:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.487 16:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:05.745 nvme0n1 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.745 
16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.745 
16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.745 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.003 nvme0n1 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:06.003 16:58:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.003 nvme0n1 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
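Each connect_authenticate round traced here has the same shape: nvmet_auth_set_key pins the kernel target's expectations for this host (digest, DH group, and the DHHC-1 secrets), the initiator is restricted to the matching single digest/dhgroup pair, the controller is attached with the corresponding keyring entries, its presence is verified, and it is detached. Condensed to its bare commands, one round looks roughly like the sketch below; the rpc_cmd lines mirror the trace, while the configfs attribute names on the target side are assumptions based on the Linux nvmet host interface:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"        # negotiated digest
echo ffdhe2048 > "$host/dhchap_dhgroup"          # negotiated DH group
echo "DHHC-1:00:...:" > "$host/dhchap_key"       # keys[keyid], full string as in the trace
echo "DHHC-1:02:...:" > "$host/dhchap_ctrl_key"  # ckeys[keyid], only when non-empty

rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
rpc_cmd bdev_nvme_detach_controller nvme0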
00:45:06.003 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.261 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.262 nvme0n1 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.262 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:06.521 16:58:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.521 16:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.521 nvme0n1 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:06.521 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.779 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.780 nvme0n1 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.780 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.038 nvme0n1 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.038 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.295 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.295 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:07.295 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:07.295 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:07.295 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:07.295 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:07.295 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:07.295 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.296 nvme0n1 00:45:07.296 
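
[editor's note] The get_main_ns_ip trace repeated throughout this section resolves the initiator address by indirection: it maps the transport to the *name* of an environment variable, then dereferences that name. A minimal reconstruction of the helper as the trace shows it (that the transport string comes from an exported TEST_TRANSPORT, and that NVMF_INITIATOR_IP is exported by the harness, are assumptions of this sketch):

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                 # "tcp" in this run
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                 # variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                          # indirect expansion -> 10.0.0.1 here
    echo "${!ip}"                                        # matches the trace's "echo 10.0.0.1"
}
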
16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.296 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.553 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.553 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:07.553 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:45:07.553 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:07.553 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:07.553 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:07.553 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:07.553 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:07.553 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.554 16:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.554 nvme0n1 00:45:07.554 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.554 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:07.554 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:07.554 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.554 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.554 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
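
[editor's note] Every keyid round in this section follows the same four-step shape, as in the keyid=2 / ffdhe3072 round that just completed above. Restated as direct scripts/rpc.py invocations (the rpc.py path is an assumption of this sketch; the trace goes through the harness wrapper rpc_cmd, but the RPC names and flags below are taken verbatim from the trace):

rpc=./scripts/rpc.py    # assumed SPDK checkout root

# 1) Pin the initiator to the digest/DH-group pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# 2) Attach with this round's DH-HMAC-CHAP key; the controller key is passed
#    only when the keyid defines one (bidirectional authentication).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3) Authentication succeeded iff the controller materialized.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4) Tear down before the next keyid/dhgroup combination.
$rpc bdev_nvme_detach_controller nvme0

The detach in step 4 is what produces the repeating "nvme0n1" / bdev_nvme_get_controllers pattern across this whole section: each dhgroup-keyid combination gets a fresh controller.
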
00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.812 nvme0n1 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.812 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.070 
16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:08.070 16:58:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:08.070 nvme0n1 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:08.070 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:45:08.328 16:58:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.328 16:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:08.586 nvme0n1 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:08.586 16:58:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.586 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 nvme0n1 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:09.152 16:58:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.152 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.410 nvme0n1 00:45:09.410 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.410 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:09.410 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.410 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.410 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:09.410 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.410 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:09.410 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:09.410 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.410 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
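
[editor's note] The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line in the trace is a bash idiom worth unpacking: it builds an argument array that is empty whenever no controller key exists for the keyid (as with keyid=4 above, whose ckey is ''), so the same attach invocation works for both unidirectional and bidirectional rounds. A standalone illustration with hypothetical key material (the DHHC value below is a placeholder, not one of the keys in this run):

ckeys=([1]="DHHC-1:01:hypothetical-ctrlr-key:" [4]="")   # indexed array, as in auth.sh

keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "keyid=$keyid -> ${#ckey[@]} extra args"   # 0: unidirectional auth only

keyid=1
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "keyid=$keyid -> ${#ckey[@]} extra args"   # 2: bidirectional auth

The inner quotes inside ${:+...} keep "ckeyN" a single word after the unquoted expansion splits off --dhchap-ctrlr-key, which is why the array expands to exactly zero or two arguments.
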
00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.411 16:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.411 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.977 nvme0n1 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.977 16:58:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.977 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:10.236 nvme0n1 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:10.236 16:58:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:10.236 16:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:10.803 nvme0n1 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:10.803 
16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:10.803 16:58:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:10.803 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:11.369 nvme0n1 00:45:11.369 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:11.369 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:11.369 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:11.369 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:11.369 16:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:11.369 16:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:11.369 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:11.369 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:11.369 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:11.369 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:11.627 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:12.193 nvme0n1 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:12.193 
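A detail visible in the trace above: connect_authenticate builds its controller-key argument conditionally. The @58 marker shows ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), so bidirectional DH-HMAC-CHAP is only requested when a controller key was registered for that slot. A minimal sketch of the effect, using only the RPC seen in this trace (the keyN/ckeyN key objects are the ones loaded earlier in the test, not shown in this excerpt):

# Sketch of the @58 expansion in host/auth.sh's connect_authenticate.
# ckeys[] holds the controller-side (bidirectional) keys; when the slot
# is empty, ckey expands to nothing and the attach is host-auth only.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

That matches the keyid=4 iterations in this log, where the trace shows ckey= empty ([[ -z '' ]]) and the controller attaches with --dhchap-key key4 alone.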
16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:12.193 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:12.194 16:58:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:12.760 nvme0n1 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:12.760 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:13.326 nvme0n1 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:13.326 16:58:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:14.700 nvme0n1 00:45:14.700 16:58:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:14.700 16:58:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:14.700 16:58:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:14.700 16:58:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:14.700 16:58:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:14.701 16:58:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:14.701 16:58:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:15.635 nvme0n1 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:15.635 16:58:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:16.568 nvme0n1 00:45:16.568 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:16.569 
16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:16.569 16:58:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:17.503 nvme0n1 00:45:17.503 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:17.503 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:17.503 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:17.503 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:17.503 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:17.503 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:17.761 
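Each passing iteration in this stream follows the same RPC cycle, traced at the @60-@65 markers: install the key on the kernel nvmet target side (nvmet_auth_set_key, whose echo writes appear at @48-@51), restrict the host to the digest/DH-group pair under test, attach with the matching key slot, verify the controller by name, then detach. The repeated ip_candidates block (get_main_ns_ip, common.sh@741-755) simply resolves the address to dial by transport, NVMF_INITIATOR_IP (10.0.0.1 here) for tcp. A condensed sketch of one iteration, assuming the rpc_cmd wrapper and key objects set up earlier in the test:

# One connect_authenticate cycle as traced above; digest/dhgroup/keyid
# values are examples from this run.
digest=sha256 dhgroup=ffdhe8192 keyid=3

# Target side: register the DH-HMAC-CHAP key for this host.
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# Host side: limit negotiation to the digest and DH group under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach forces an authenticated connect with the keyN/ckeyN pair.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Success is checked by controller name, then torn down for the next pair.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0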
16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:17.761 16:58:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:18.695 nvme0n1 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:18.695 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:18.696 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:18.696 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:18.696 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:18.696 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:18.696 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:18.696 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:18.696 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:18.696 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:18.696 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:18.696 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:18.954 nvme0n1 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
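The @100-@103 markers that open this sha384 pass expose the loop nest driving all of these records: every digest is crossed with every DH group and every key slot. In outline (array contents are the ones exercised in this excerpt, sha256 and sha384 over ffdhe2048, ffdhe6144, and ffdhe8192 with key slots 0-4; the full arrays are populated earlier in the test):

# Loop nest from host/auth.sh as traced at @100-@104.
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done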
00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:18.954 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.213 nvme0n1 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.213 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.471 nvme0n1 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.471 16:58:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.729 nvme0n1 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.729 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.730 nvme0n1 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:19.730 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
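The trace above runs nvmet_auth_set_key sha384 ffdhe3072 0, which provisions the target side of the DH-HMAC-CHAP handshake before each connection attempt: it maps the digest to its kernel crypto name ('hmac(sha384)'), selects the DH group, installs the DHHC-1 secret for this keyid, and installs a controller secret only when one is defined for that keyid. A minimal sketch of such a helper is below; the configfs directory and attribute names are assumptions for illustration, since the trace records only the values being echoed:

    # Hypothetical sketch of the target-side provisioning step traced above.
    # The configfs path and attribute names are assumed, not shown in the log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

        echo "hmac(${digest})" > "${hostdir}/dhchap_hash"     # e.g. 'hmac(sha384)'
        echo "${dhgroup}"      > "${hostdir}/dhchap_dhgroup"  # e.g. 'ffdhe3072'
        echo "${key}"          > "${hostdir}/dhchap_key"      # DHHC-1:0X:... secret
        # Keyids with an empty ckey (keyid 4 in this run) skip the controller
        # secret, so authentication stays unidirectional for them:
        [[ -z ${ckey} ]] || echo "${ckey}" > "${hostdir}/dhchap_ctrl_key"
    }
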
00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.987 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.246 nvme0n1 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
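Each connect_authenticate <digest> <dhgroup> <keyid> pass, such as the sha384 ffdhe3072 1 invocation above, then drives the initiator side through SPDK RPCs: it restricts the host to the single digest and DH group under test, resolves the connect address via get_main_ns_ip (for TCP this selects NVMF_INITIATOR_IP, here 10.0.0.1), attaches with the matching key pair, and treats the pass as successful only if the controller is visible afterwards. A condensed sketch of the traced sequence, where rpc_cmd is the harness's own RPC wrapper and all arguments appear verbatim in the log:

    # Condensed sketch of the traced host-side flow; rpc_cmd wraps the SPDK RPC client.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Allow exactly one digest and one DH group for this pass.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
        # Attach with key<N>; the controller key is added only when one is
        # defined (keyid 4 in this run has an empty ckey, so auth is one-way).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # Success criterion: the controller shows up by name, then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
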
00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:20.246 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:20.247 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:20.247 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:20.247 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.247 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.505 nvme0n1 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.505 16:58:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.762 nvme0n1 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:45:20.762 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.763 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.021 nvme0n1 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.021 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.279 nvme0n1 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.279 16:58:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.279 16:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.537 nvme0n1 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:21.537 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.538 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.103 nvme0n1 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.103 16:58:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:22.103 16:58:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:22.104 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:22.104 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:22.104 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:22.104 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:22.104 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:22.104 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.104 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.362 nvme0n1 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:45:22.362 16:58:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.362 16:58:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.620 nvme0n1 00:45:22.620 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.620 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:22.620 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.620 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:22.620 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.620 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.620 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:22.620 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:22.620 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.620 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.877 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.877 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:45:22.877 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:45:22.877 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:22.877 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:22.877 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:22.877 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:45:22.878 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:23.135 nvme0n1 00:45:23.135 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:23.135 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:23.135 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:23.135 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:23.136 16:58:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:23.701 nvme0n1 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:23.701 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:24.266 nvme0n1 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:24.266 16:58:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:24.266 16:58:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:24.267 16:58:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:24.267 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:24.267 16:58:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:24.832 nvme0n1 00:45:24.832 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:25.089 16:58:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:25.655 nvme0n1 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:45:25.655 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:25.656 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:25.656 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:25.656 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:25.656 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:26.222 nvme0n1 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
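[Annotation] The nvmet_auth_set_key rounds traced above each emit three echoes — 'hmac(<digest>)', the DH group name, and the DHHC-1 secret (plus a fourth when a controller key is set). That is consistent with writing the Linux kernel nvmet auth attributes for the host NQN. A minimal sketch of what such a helper could look like, assuming the standard nvmet configfs layout and the keys/ckeys arrays driving the loop; none of these paths appear verbatim in this log, so treat them as assumptions rather than the script's actual body:

# Hedged reconstruction of the target-side helper seen in the trace.
# Assumes keys[]/ckeys[] hold the DHHC-1 secrets iterated by the loop
# and that the host NQN directory exists under nvmet configfs.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha384)
    echo "${dhgroup}" > "${host}/dhchap_dhgroup"        # e.g. ffdhe8192
    echo "${key}" > "${host}/dhchap_key"                # DHHC-1:0x:... host secret
    # Controller key only when bidirectional auth is being tested:
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
}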
00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.222 16:58:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:27.596 nvme0n1 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:27.596 16:58:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:28.529 nvme0n1 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:28.529 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:28.530 16:58:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:29.464 nvme0n1 00:45:29.464 16:58:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:29.464 16:58:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:29.464 16:58:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:29.464 16:58:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:29.464 16:58:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:29.464 16:58:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:29.464 16:58:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:30.398 nvme0n1 00:45:30.398 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:30.398 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:45:30.398 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:30.398 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:30.398 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:30.398 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:30.656 16:58:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:30.656 16:58:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:31.636 nvme0n1 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:45:31.636 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:31.637 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:31.913 nvme0n1 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:31.913 16:58:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:31.913 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.172 nvme0n1 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.172 nvme0n1 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:32.172 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.430 16:58:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:32.430 16:58:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.430 16:58:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.430 nvme0n1 00:45:32.430 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.430 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:32.430 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.430 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.430 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:32.430 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.688 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.689 nvme0n1 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.689 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.947 nvme0n1 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.947 
16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:32.947 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.205 16:58:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.205 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.470 nvme0n1 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.470 16:58:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.470 nvme0n1 00:45:33.470 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.470 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:33.470 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.470 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:33.470 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.734 16:58:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.734 nvme0n1 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.734 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:33.993 
16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:33.993 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.994 nvme0n1 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:33.994 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:34.252 16:58:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:34.510 nvme0n1 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:34.511 16:58:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:34.511 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.078 nvme0n1 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.078 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.336 nvme0n1 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:35.336 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.337 16:58:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.595 nvme0n1 00:45:35.595 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.595 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:35.595 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:35.595 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.595 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.595 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.854 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.855 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:36.114 nvme0n1 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
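
The nvmet_auth_set_key trace above (host/auth.sh@42-51) is the target-side half of each iteration: it provisions the kernel nvmet target with the DH-HMAC-CHAP parameters for one digest/dhgroup/keyid combination before the host tries to authenticate against them. The four echoes at auth.sh@48-51 each feed one attribute. A minimal standalone sketch, assuming the upstream Linux nvmet configfs layout (the /sys/kernel/config/nvmet path and the attribute names are assumptions based on that interface, not something this log states):

  # Sketch of nvmet_auth_set_key sha512 ffdhe6144 0, assuming the nvmet target is
  # already up and an entry for the host NQN exists under configfs.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host/dhchap_hash"      # digest, echoed at auth.sh@48
  echo 'ffdhe6144' > "$host/dhchap_dhgroup"      # DH group, echoed at auth.sh@49
  # Host key (auth.sh@50); the DHHC-1:<hash id>:<base64 secret>: strings below are
  # the keyid 0 pair taken verbatim from the trace:
  echo 'DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG:' > "$host/dhchap_key"
  # Controller key for bidirectional auth, written only when a ckey is defined (auth.sh@51):
  echo 'DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=:' > "$host/dhchap_ctrl_key"
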
00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.114 16:58:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:36.682 nvme0n1 00:45:36.682 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.682 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:36.682 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:36.682 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.682 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:36.682 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.940 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:36.940 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:36.940 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
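
connect_authenticate (host/auth.sh@104, expanded at @55-65) is the initiator-side half: it restricts the SPDK host to the digest and DH group under test, attaches a controller with the matching key pair, checks that the controller actually appeared, and detaches it so the next combination starts clean. A condensed sketch of the same four RPCs via SPDK's rpc.py, with every flag copied from the trace; it assumes key1/ckey1 were registered with the application earlier in the run, which happens outside this excerpt:

  # Host side of connect_authenticate sha512 ffdhe6144 1 (auth.sh@60-65).
  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Verify the authenticated controller exists, then tear it down (auth.sh@64-65):
  [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc.py bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved in the output are the bdev names that each successful attach prints.
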
00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.941 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:37.507 nvme0n1 00:45:37.507 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:37.507 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:37.507 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:37.507 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:37.507 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:37.507 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:37.507 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:37.507 16:58:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:37.507 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:37.507 16:58:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:37.507 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:37.508 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:38.074 nvme0n1 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:38.074 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.075 16:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:38.641 nvme0n1 00:45:38.641 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.641 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:38.641 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:38.641 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.641 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:38.641 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.899 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:39.466 nvme0n1 00:45:39.466 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.466 16:58:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:39.466 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.466 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:39.466 16:58:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:39.466 16:58:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWEyNjQwZDYzMzAxNzg2MTdjYmExNGJiZjZkNDYwNTVvDxbG: 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: ]] 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTBlOGRmM2ZjOTNjMjNkODYxZTc2MWY1ZTg5YzQyMGI2MjQ2OWU4ZWM2NzQ2MzFmN2RhOGMyMDYyZGU0NDAyNa6/9I0=: 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.466 16:58:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:40.840 nvme0n1 00:45:40.840 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:40.841 16:59:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:41.776 nvme0n1 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.776 16:59:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODU3MTllZGU3NzA2Zjg1YWRiZmZhNzYxYWE2ZTMzOTIG5Cqr: 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: ]] 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ5NjljOTIxYmNiZWEwZTViNzg3YTg4ZTU1ZDZlYWPvAdhk: 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.776 16:59:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:42.712 nvme0n1 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTIzNDIyNGQ5ZmE3ZDUxZWRmZjViNTFjNTVlMzliMDlmMDFmMDIxZWEzNmU1NmRijY+G4A==: 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: ]] 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzRmZGY1NzQyNmUwMmUzZmNjMTM2OWYyZTAyMzZhM2HXy7Wq: 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:45:42.712 16:59:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:42.712 16:59:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:44.085 nvme0n1 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTMyOTFiN2ZlNzAwMjFiYTZkMmUwODBkOTU5ODM0M2ViOWRjOGM2NTJhNDk4MjkxMmM5NjgzYjExMjIyYzEyZoHJpyo=: 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:45:44.085 16:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:45.018 nvme0n1 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2IyY2M5YTQ4ZTg5YTlkYzc0ZDVmNDA1NDdkNWQxYzFmZTcwZjMyODM3NzVmYTQwUc8Tpw==: 00:45:45.018 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: ]] 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2M0NTMwZjRjMGU0MDkzYjhlOTM4YzA3MTMxMjgxYzBmZDI1NDhkMjVhMTVhODg1Yf+Z5w==: 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:45.019 
16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:45.019 request: 00:45:45.019 { 00:45:45.019 "name": "nvme0", 00:45:45.019 "trtype": "tcp", 00:45:45.019 "traddr": "10.0.0.1", 00:45:45.019 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:45:45.019 "adrfam": "ipv4", 00:45:45.019 "trsvcid": "4420", 00:45:45.019 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:45:45.019 "method": "bdev_nvme_attach_controller", 00:45:45.019 "req_id": 1 00:45:45.019 } 00:45:45.019 Got JSON-RPC error response 00:45:45.019 response: 00:45:45.019 { 00:45:45.019 "code": -5, 00:45:45.019 "message": "Input/output error" 00:45:45.019 } 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:45:45.019 
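Note: the trace above re-keys the kernel target (hmac(sha256), ffdhe2048, and the DHHC-1 key/ckey written via nvmet_auth_set_key), restricts the SPDK initiator to the same digest and DH group, then attaches on purpose without a host key; the JSON-RPC code -5 (Input/output error) is the expected outcome, and the follow-up jq length check proves no stale controller survived the failed attach. A minimal stand-alone sketch of the same negative test, assuming scripts/rpc.py is run from an SPDK checkout and the target listens on 10.0.0.1:4420 as in this run:

    # Restrict the initiator to the digest/DH group under test (values from the trace).
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Attach WITHOUT --dhchap-key: the target requires DH-HMAC-CHAP, so this must fail.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "attach unexpectedly succeeded" >&2
        exit 1
    fi

    # The failed attempt must not leave a controller behind.
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]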
16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:45.019 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:45.278 request: 00:45:45.278 { 00:45:45.278 "name": "nvme0", 00:45:45.278 "trtype": "tcp", 00:45:45.278 "traddr": "10.0.0.1", 00:45:45.278 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:45:45.278 "adrfam": "ipv4", 00:45:45.278 "trsvcid": "4420", 00:45:45.278 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:45:45.278 "dhchap_key": "key2", 00:45:45.278 "method": "bdev_nvme_attach_controller", 00:45:45.278 "req_id": 1 00:45:45.278 } 00:45:45.278 Got JSON-RPC error response 00:45:45.278 response: 00:45:45.278 { 00:45:45.278 "code": -5, 00:45:45.278 "message": "Input/output error" 00:45:45.278 } 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:45.278 
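Note: the valid_exec_arg/es bookkeeping wrapped around each failing call is autotest_common.sh's NOT helper, which inverts a command's exit status so an expected failure counts as a pass. A reduced sketch of the idea (simplified helper, not the exact library code):

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # unexpected success
        fi
        return 0
    }

    # Usage mirroring the trace: a wrong controller key (key2) must be rejected.
    NOT scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2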
16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:45.278 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:45.279 request: 00:45:45.279 { 00:45:45.279 "name": "nvme0", 00:45:45.279 "trtype": "tcp", 00:45:45.279 "traddr": "10.0.0.1", 00:45:45.279 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:45:45.279 "adrfam": "ipv4", 00:45:45.279 "trsvcid": "4420", 00:45:45.279 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:45:45.279 "dhchap_key": "key1", 00:45:45.279 "dhchap_ctrlr_key": "ckey2", 00:45:45.279 "method": "bdev_nvme_attach_controller", 00:45:45.279 "req_id": 1 
00:45:45.279 } 00:45:45.279 Got JSON-RPC error response 00:45:45.279 response: 00:45:45.279 { 00:45:45.279 "code": -5, 00:45:45.279 "message": "Input/output error" 00:45:45.279 } 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:45.279 rmmod nvme_tcp 00:45:45.279 rmmod nvme_fabrics 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2946039 ']' 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2946039 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 2946039 ']' 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 2946039 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2946039 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2946039' 00:45:45.279 killing process with pid 2946039 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 2946039 00:45:45.279 16:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 2946039 00:45:45.538 16:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:45.538 16:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:45.538 16:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:45.538 16:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:45.538 16:59:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:45.538 16:59:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:45.538 16:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:45.538 16:59:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:45:48.072 16:59:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:49.447 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:49.447 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:49.448 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:49.448 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:49.448 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:49.448 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:49.448 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:49.448 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:49.448 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:49.448 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:49.448 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:49.448 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:49.448 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:49.448 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:49.448 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:49.448 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:51.349 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:45:51.349 16:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.CTH /tmp/spdk.key-null.evJ /tmp/spdk.key-sha256.y3d /tmp/spdk.key-sha384.6Oi /tmp/spdk.key-sha512.NDI /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:45:51.349 16:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:52.720 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:45:52.720 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:45:52.720 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:45:52.720 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:45:52.720 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:45:52.720 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:45:52.720 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:45:52.720 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:45:52.720 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:45:52.720 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:45:52.720 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:45:52.720 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:45:52.720 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:45:52.720 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:45:52.720 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:45:52.720 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:45:52.720 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:45:52.720 00:45:52.720 real 0m53.987s 00:45:52.720 user 0m50.282s 00:45:52.720 sys 0m6.541s 00:45:52.720 16:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:45:52.720 16:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:45:52.720 ************************************ 00:45:52.720 END TEST nvmf_auth_host 00:45:52.720 ************************************ 00:45:52.720 16:59:12 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:45:52.720 16:59:12 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:45:52.720 16:59:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:45:52.720 16:59:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:45:52.720 16:59:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:52.720 ************************************ 00:45:52.720 START TEST nvmf_digest 00:45:52.720 ************************************ 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:45:52.720 * Looking for test storage... 
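Note: the cleanup that closed out nvmf_auth_host tears the kernel nvmet configfs tree down in reverse dependency order before unloading the modules. A condensed sketch with the NQNs from this run; the destination of the echo 0 is elided in the trace, and per the nvmet configfs layout it is assumed here to be the namespace enable attribute:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"   # drop the host ACL link first
    rmdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$subsys/namespaces/1/enable"                 # disable the namespace (assumed path)
    rm -f "$nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
    rmdir "$subsys/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                            # only once configfs is empty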
00:45:52.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:52.720 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:45:52.721 16:59:12 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:45:52.721 16:59:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:45:55.251 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:55.251 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:45:55.251 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:45:55.251 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:45:55.251 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:45:55.251 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:45:55.251 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:45:55.251 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:45:55.252 Found 0000:82:00.0 (0x8086 - 0x159b) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:45:55.252 Found 0000:82:00.1 (0x8086 - 0x159b) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:45:55.252 Found net devices under 0000:82:00.0: cvl_0_0 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:45:55.252 Found net devices under 0000:82:00.1: cvl_0_1 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:45:55.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:55.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:45:55.252 00:45:55.252 --- 10.0.0.2 ping statistics --- 00:45:55.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.252 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:55.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:55.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:45:55.252 00:45:55.252 --- 10.0.0.1 ping statistics --- 00:45:55.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.252 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:45:55.252 ************************************ 00:45:55.252 START TEST nvmf_digest_clean 00:45:55.252 ************************************ 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:45:55.252 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2956462 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2956462 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2956462 ']' 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:55.253 
16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:55.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:55.253 16:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:45:55.511 [2024-07-22 16:59:14.936005] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:45:55.511 [2024-07-22 16:59:14.936088] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:55.511 EAL: No free 2048 kB hugepages reported on node 1 00:45:55.511 [2024-07-22 16:59:15.015764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:55.511 [2024-07-22 16:59:15.108184] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:55.511 [2024-07-22 16:59:15.108233] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:55.511 [2024-07-22 16:59:15.108250] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:55.511 [2024-07-22 16:59:15.108264] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:55.511 [2024-07-22 16:59:15.108275] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
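Note: nvmfappstart launches nvmf_tgt inside the target namespace with --wait-for-rpc, so the app initializes DPDK and then idles until RPCs arrive; waitforlisten is the autotest helper that polls the RPC socket for the given pid. Roughly, with the workspace prefix shortened:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # poll /var/tmp/spdk.sock until the app answers

The rpc_cmd batch that follows (digest.sh@43) completes startup: it creates the null0 bdev and brings up the TCP listener on 10.0.0.2:4420 seen below.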
00:45:55.511 [2024-07-22 16:59:15.108305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:56.442 16:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:56.442 16:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:45:56.442 16:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:56.442 16:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:56.442 16:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:45:56.442 16:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:56.442 16:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:45:56.442 16:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:45:56.442 16:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:45:56.442 16:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.442 16:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:45:56.442 null0 00:45:56.442 [2024-07-22 16:59:16.048014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:56.442 [2024-07-22 16:59:16.072228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2956616 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2956616 /var/tmp/bperf.sock 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2956616 ']' 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:45:56.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:56.443 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:45:56.701 [2024-07-22 16:59:16.120181] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:45:56.701 [2024-07-22 16:59:16.120253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956616 ] 00:45:56.701 EAL: No free 2048 kB hugepages reported on node 1 00:45:56.701 [2024-07-22 16:59:16.191348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:56.701 [2024-07-22 16:59:16.282246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:56.701 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:56.701 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:45:56.701 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:45:56.701 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:45:56.701 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:57.266 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:45:57.266 16:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:45:57.523 nvme0n1 00:45:57.523 16:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:45:57.523 16:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:57.781 Running I/O for 2 seconds... 
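Note: each run_bperf pass drives a dedicated bdevperf instance over /var/tmp/bperf.sock. The shape of one pass, condensed from the trace (socket path and NQN as in this run, workspace prefix shortened):

    # bdevperf was started with --wait-for-rpc; finish its framework init first.
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

    # Attach with TCP data digest (--ddgst) so every data PDU carries a CRC32C.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Start the configured workload (here randread, 4 KiB, qd=128, 2 s) and wait for results.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests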
00:45:59.677 00:45:59.677 Latency(us) 00:45:59.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:59.677 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:59.677 nvme0n1 : 2.00 20452.86 79.89 0.00 0.00 6250.26 3058.35 15631.55 00:45:59.677 =================================================================================================================== 00:45:59.677 Total : 20452.86 79.89 0.00 0.00 6250.26 3058.35 15631.55 00:45:59.677 0 00:45:59.677 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:45:59.677 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:45:59.677 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:45:59.677 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:59.677 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:45:59.677 | select(.opcode=="crc32c") 00:45:59.677 | "\(.module_name) \(.executed)"' 00:45:59.935 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:45:59.935 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:45:59.935 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:45:59.935 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:45:59.935 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2956616 00:45:59.935 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2956616 ']' 00:45:59.935 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2956616 00:45:59.935 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:45:59.935 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:59.936 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2956616 00:45:59.936 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:45:59.936 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:45:59.936 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2956616' 00:45:59.936 killing process with pid 2956616 00:45:59.936 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2956616 00:45:59.936 Received shutdown signal, test time was about 2.000000 seconds 00:45:59.936 00:45:59.936 Latency(us) 00:45:59.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:59.936 =================================================================================================================== 00:45:59.936 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:59.936 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2956616 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:46:00.193 16:59:19 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2957102 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2957102 /var/tmp/bperf.sock 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2957102 ']' 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:00.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:46:00.193 16:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:46:00.193 [2024-07-22 16:59:19.826726] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:46:00.193 [2024-07-22 16:59:19.826806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2957102 ] 00:46:00.193 I/O size of 131072 is greater than zero copy threshold (65536). 00:46:00.193 Zero copy mechanism will not be used. 
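Note: the second pass repeats the read test with 128 KiB I/O at queue depth 16; bdevperf reports that 131072 bytes exceeds its 65536-byte zero-copy threshold, so buffers are copied rather than sent zero-copy. Only the bdevperf invocation changes (workspace prefix shortened):

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc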
00:46:00.451 EAL: No free 2048 kB hugepages reported on node 1 00:46:00.451 [2024-07-22 16:59:19.897898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:00.451 [2024-07-22 16:59:19.990265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:00.451 16:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:46:00.451 16:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:46:00.451 16:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:46:00.451 16:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:46:00.451 16:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:46:01.015 16:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:01.015 16:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:01.271 nvme0n1 00:46:01.271 16:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:46:01.271 16:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:01.529 I/O size of 131072 is greater than zero copy threshold (65536). 00:46:01.529 Zero copy mechanism will not be used. 00:46:01.529 Running I/O for 2 seconds... 
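Note: once a pass finishes, the harness reads the accel framework counters to prove the digests were really computed, and computed in software, since no DSA was requested (scan_dsa=false). The check, reusing the jq filter from the trace:

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
        | {
            read -r acc_module acc_executed
            (( acc_executed > 0 )) && [[ $acc_module == software ]]
          }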
00:46:03.426
00:46:03.426 Latency(us)
00:46:03.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:46:03.426 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:46:03.426 nvme0n1 : 2.00 3235.92 404.49 0.00 0.00 4940.75 1268.24 12815.93
00:46:03.426 ===================================================================================================================
00:46:03.426 Total : 3235.92 404.49 0.00 0.00 4940.75 1268.24 12815.93
00:46:03.426 0
00:46:03.426 16:59:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:46:03.426 16:59:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:46:03.426 16:59:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:46:03.426 16:59:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:46:03.426 16:59:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:46:03.426 | select(.opcode=="crc32c")
00:46:03.426 | "\(.module_name) \(.executed)"'
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2957102
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2957102 ']'
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2957102
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2957102
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2957102'
00:46:03.683 killing process with pid 2957102
16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2957102
00:46:03.683 Received shutdown signal, test time was about 2.000000 seconds
00:46:03.683
00:46:03.683 Latency(us)
00:46:03.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:46:03.683 ===================================================================================================================
00:46:03.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:46:03.683 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2957102
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2957542
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2957542 /var/tmp/bperf.sock
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2957542 ']'
00:46:03.940 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:46:03.941 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100
00:46:03.941 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:46:03.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:46:03.941 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable
00:46:03.941 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:46:03.941 [2024-07-22 16:59:23.511595] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
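The bdevperf launch line in the block above is worth unpacking, since each run in this test only varies -w/-o/-q. An annotated restatement (the flag glosses are my reading of bdevperf's usage text, not something the log states):

    # Annotated restatement of the bdevperf invocation recorded above.
    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    args=(
      -m 2                    # core mask 0x2: run the reactor on core 1
      -r /var/tmp/bperf.sock  # serve JSON-RPC on this UNIX socket
      -w randwrite            # workload pattern
      -o 4096                 # I/O size in bytes
      -t 2                    # run time in seconds
      -q 128                  # queue depth
      -z                      # stay idle until a perform_tests RPC arrives
      --wait-for-rpc          # defer framework init until framework_start_init
    )
    "$BDEVPERF" "${args[@]}" &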
00:46:03.941 [2024-07-22 16:59:23.511677] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2957542 ] 00:46:03.941 EAL: No free 2048 kB hugepages reported on node 1 00:46:03.941 [2024-07-22 16:59:23.584492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:04.198 [2024-07-22 16:59:23.675293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:04.199 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:46:04.199 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:46:04.199 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:46:04.199 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:46:04.199 16:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:46:04.456 16:59:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:04.456 16:59:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:05.021 nvme0n1 00:46:05.021 16:59:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:46:05.021 16:59:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:05.021 Running I/O for 2 seconds... 
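Every run here attaches its controller with --ddgst, which is the point of nvmf_digest_clean: with the TCP data digest enabled, the initiator computes and verifies a CRC32C over each data PDU, and that work shows up in the accel framework's crc32c counters checked after the run. A sketch of the digest-enabled attach next to a plain one (addresses and NQN are from the log; the second bdev name "nvme1" is purely illustrative):

    # Sketch: same target, attached once with and once without TCP data digest.
    RPC() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock "$@"; }
    RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # digest on, as logged
    RPC bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme1   # digest off, hypothetical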
00:46:07.548
00:46:07.548 Latency(us)
00:46:07.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:46:07.548 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:46:07.548 nvme0n1 : 2.00 21891.55 85.51 0.00 0.00 5841.12 2852.03 16505.36
00:46:07.548 ===================================================================================================================
00:46:07.548 Total : 21891.55 85.51 0.00 0.00 5841.12 2852.03 16505.36
00:46:07.548 0
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:46:07.548 | select(.opcode=="crc32c")
00:46:07.548 | "\(.module_name) \(.executed)"'
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2957542
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2957542 ']'
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2957542
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2957542
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2957542'
00:46:07.548 killing process with pid 2957542
16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2957542
00:46:07.548 Received shutdown signal, test time was about 2.000000 seconds
00:46:07.548
00:46:07.548 Latency(us)
00:46:07.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:46:07.548 ===================================================================================================================
00:46:07.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:46:07.548 16:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2957542
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2957955
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2957955 /var/tmp/bperf.sock
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2957955 ']'
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:46:07.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable
00:46:07.548 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:46:07.548 [2024-07-22 16:59:27.175265] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:46:07.548 [2024-07-22 16:59:27.175364] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2957955 ]
00:46:07.548 I/O size of 131072 is greater than zero copy threshold (65536).
00:46:07.548 Zero copy mechanism will not be used.
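After forking each bdevperf instance, the harness parks in waitforlisten until the new process has bound /var/tmp/bperf.sock (the "Waiting for process to start up..." lines above, with max_retries=100). A minimal stand-in with the same shape, assuming only bash; this is a sketch of the pattern, not the harness's actual helper:

    # Poll until the given pid has bound its UNIX RPC socket, or give up
    # after max_retries attempts (the harness uses max_retries=100).
    waitforlisten_sketch() {
      local pid=$1 sock=${2:-/var/tmp/bperf.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
      while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died while we waited
        [[ -S $sock ]] && return 0               # socket is up; RPCs can be sent
        sleep 0.1
      done
      return 1
    }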
00:46:07.806 EAL: No free 2048 kB hugepages reported on node 1 00:46:07.806 [2024-07-22 16:59:27.246541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:07.806 [2024-07-22 16:59:27.337054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:07.806 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:46:07.806 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:46:07.806 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:46:07.806 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:46:07.806 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:46:08.064 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:08.064 16:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:08.628 nvme0n1 00:46:08.628 16:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:46:08.628 16:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:08.628 I/O size of 131072 is greater than zero copy threshold (65536). 00:46:08.628 Zero copy mechanism will not be used. 00:46:08.628 Running I/O for 2 seconds... 
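Each run ends with the same verification step (it appears again right after the result table that follows): pull accel framework statistics over the bperf socket and extract how many crc32c operations ran and in which module, expecting a non-zero count from "software" since scan_dsa=false. The pipeline, reassembled from the xtrace lines:

    # Post-run verification, reassembled from the xtrace output above.
    read -r acc_module acc_executed < <(
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[]
              | select(.opcode=="crc32c")
              | "\(.module_name) \(.executed)"'
    )
    # crc32c must actually have run, and in the expected module.
    (( acc_executed > 0 )) && [[ $acc_module == software ]]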
00:46:10.526
00:46:10.526 Latency(us)
00:46:10.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:46:10.527 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:46:10.527 nvme0n1 : 2.00 4509.18 563.65 0.00 0.00 3539.46 2475.80 10048.85
00:46:10.527 ===================================================================================================================
00:46:10.527 Total : 4509.18 563.65 0.00 0.00 3539.46 2475.80 10048.85
00:46:10.527 0
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:46:10.783 | select(.opcode=="crc32c")
00:46:10.783 | "\(.module_name) \(.executed)"'
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2957955
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2957955 ']'
00:46:10.783 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2957955
00:46:10.784 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:46:10.784 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:46:10.784 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2957955
00:46:11.046 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:46:11.046 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:46:11.046 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2957955'
00:46:11.046 killing process with pid 2957955
16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2957955
00:46:11.046 Received shutdown signal, test time was about 2.000000 seconds
00:46:11.046
00:46:11.046 Latency(us)
00:46:11.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:46:11.046 ===================================================================================================================
00:46:11.046 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:46:11.046 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2957955
00:46:11.046 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2956462
00:46:11.046 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2956462 ']'
00:46:11.046 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2956462
00:46:11.046 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:46:11.046 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:46:11.046 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2956462
00:46:11.337 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:46:11.337 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:46:11.337 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2956462'
00:46:11.337 killing process with pid 2956462
16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2956462
00:46:11.337 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2956462
00:46:11.337
00:46:11.337 real 0m16.067s
00:46:11.337 user 0m30.538s
00:46:11.337 sys 0m4.972s
00:46:11.337 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable
00:46:11.337 16:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:46:11.337 ************************************
00:46:11.337 END TEST nvmf_digest_clean
00:46:11.337 ************************************
00:46:11.612 16:59:30 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:46:11.612 16:59:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:46:11.612 16:59:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable
00:46:11.612 16:59:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:46:11.612 ************************************
00:46:11.612 START TEST nvmf_digest_error
00:46:11.612 ************************************
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2958384
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2958384
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2958384 ']'
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:46:11.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:46:11.612 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:46:11.612 [2024-07-22 16:59:31.060688] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:46:11.612 [2024-07-22 16:59:31.060779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:46:11.612 EAL: No free 2048 kB hugepages reported on node 1
00:46:11.612 [2024-07-22 16:59:31.142402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:46:11.612 [2024-07-22 16:59:31.234062] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:46:11.612 [2024-07-22 16:59:31.234107] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:46:11.612 [2024-07-22 16:59:31.234115] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:46:11.612 [2024-07-22 16:59:31.234135] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:46:11.612 [2024-07-22 16:59:31.234147] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
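The nvmf_digest_error half of the suite starts its own target (pid 2958384 above) inside the cvl_0_0_ns_spdk network namespace, with all tracepoint groups enabled and framework init gated on RPC. The launch, lifted from the log (run as root on the target host; the spdk_trace hint is the target's own notice, not my addition):

    # Target-side launch for nvmf_digest_error, as recorded above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Per the notices above, a runtime trace snapshot can later be captured with:
    #   spdk_trace -s nvmf -i 0
    # or /dev/shm/nvmf_trace.0 can be copied off for offline analysis.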
00:46:11.612 [2024-07-22 16:59:31.234175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:11.872 [2024-07-22 16:59:31.306792] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:11.872 null0 00:46:11.872 [2024-07-22 16:59:31.428913] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:11.872 [2024-07-22 16:59:31.453126] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2958533 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2958533 /var/tmp/bperf.sock 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2958533 ']' 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:11.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:46:11.872 16:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:11.872 [2024-07-22 16:59:31.496901] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:46:11.872 [2024-07-22 16:59:31.497028] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958533 ] 00:46:12.129 EAL: No free 2048 kB hugepages reported on node 1 00:46:12.129 [2024-07-22 16:59:31.583253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:12.129 [2024-07-22 16:59:31.680349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:13.060 16:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:46:13.060 16:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:46:13.060 16:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:46:13.060 16:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:46:13.317 16:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:46:13.317 16:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:13.317 16:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:13.317 16:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:13.317 16:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:13.317 16:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:13.574 nvme0n1 00:46:13.574 16:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:46:13.575 16:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:13.575 16:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:13.575 16:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:13.575 16:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:46:13.575 16:59:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:13.575 Running I/O for 2 seconds... 00:46:13.575 [2024-07-22 16:59:33.172664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.575 [2024-07-22 16:59:33.172717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.575 [2024-07-22 16:59:33.172741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.575 [2024-07-22 16:59:33.190112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.575 [2024-07-22 16:59:33.190144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.575 [2024-07-22 16:59:33.190160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.575 [2024-07-22 16:59:33.205844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.575 [2024-07-22 16:59:33.205880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.575 [2024-07-22 16:59:33.205900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.575 [2024-07-22 16:59:33.218576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.575 [2024-07-22 16:59:33.218612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.575 [2024-07-22 16:59:33.218631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.832 [2024-07-22 16:59:33.234389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.832 [2024-07-22 16:59:33.234427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.832 [2024-07-22 16:59:33.234447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.250176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.250206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.250222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.261409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.261444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8379 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.261463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.276826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.276861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.276880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.290101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.290131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.290147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.302918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.302954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.302983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.316383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.316418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.316444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.330228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.330273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.330290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.342875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.342910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.342929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.357243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.357272] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.357289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.370921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.370957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.370986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.382982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.383037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.383054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.396720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.396750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.396766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.411230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.411275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.411291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.423186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.423216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.423233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.437091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.437129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.437147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.453671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 
16:59:33.453707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.453728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.464938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.464982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.465003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:13.833 [2024-07-22 16:59:33.480443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:13.833 [2024-07-22 16:59:33.480479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:13.833 [2024-07-22 16:59:33.480498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.493626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.493661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.493681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.506642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.506679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.506699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.520320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.520355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.520375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.532222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.532251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.532267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.545869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
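This stream of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" records is the expected outcome, not a failure: at target setup the crc32c opcode was routed to the accel "error" module and corrupt results were injected (the accel_assign_opc and accel_error_inject_error calls earlier in this test), so reads complete with a digest mismatch, and because the initiator was configured with --bdev-retry-count -1 each failure is retried rather than surfaced. The two arming RPCs, as they appear in the log; the gloss of -t/-i (inject results of type "corrupt" into 256 operations) is my reading, not stated in the log:

    # The two RPCs that arm the digest-failure storm above (both appear
    # earlier in this test; /var/tmp/spdk.sock is rpc.py's default socket).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o crc32c -m error          # route crc32c through the error module
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt injected results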
error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.545902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.545921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.563752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.563786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.563805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.581104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.581133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.581149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.593246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.593275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.593307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.609322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.609356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.609375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.625479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.625524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.625543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.637615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.637649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.637669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.651454] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.651489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.651508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.666149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.666179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.666195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.680812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.680851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.680871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.697747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.697775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.697792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.707551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.707591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.707608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.721574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.721604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.721621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.091 [2024-07-22 16:59:33.731368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.091 [2024-07-22 16:59:33.731397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.091 [2024-07-22 16:59:33.731413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:46:14.349 [2024-07-22 16:59:33.746101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.349 [2024-07-22 16:59:33.746134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.349 [2024-07-22 16:59:33.746151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.349 [2024-07-22 16:59:33.758587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.349 [2024-07-22 16:59:33.758618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.349 [2024-07-22 16:59:33.758634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.349 [2024-07-22 16:59:33.770360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.349 [2024-07-22 16:59:33.770389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.770405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.781739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.781768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.781784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.794149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.794179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.794195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.805493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.805521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.805537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.818793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.818822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.818838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.828519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.828548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.828564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.841926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.841979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.841998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.855123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.855154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.855171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.866869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.866899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.866915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.877265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.877310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.877325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.889829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.889857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.889879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.901188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.901217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.901232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.916094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.916123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.916139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.929588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.929617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.929632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.941622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.941651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.941667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.953681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.953711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.953727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.966600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.966629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.966644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.977090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.977119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.350 [2024-07-22 16:59:33.977136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.350 [2024-07-22 16:59:33.989014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.350 [2024-07-22 16:59:33.989044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:46:14.350 [2024-07-22 16:59:33.989060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.608 [2024-07-22 16:59:34.002704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.608 [2024-07-22 16:59:34.002739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.608 [2024-07-22 16:59:34.002757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.608 [2024-07-22 16:59:34.016198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.608 [2024-07-22 16:59:34.016228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.608 [2024-07-22 16:59:34.016244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.608 [2024-07-22 16:59:34.027458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.608 [2024-07-22 16:59:34.027488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.608 [2024-07-22 16:59:34.027504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.608 [2024-07-22 16:59:34.041335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.608 [2024-07-22 16:59:34.041364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.608 [2024-07-22 16:59:34.041381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.608 [2024-07-22 16:59:34.053857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.608 [2024-07-22 16:59:34.053885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.608 [2024-07-22 16:59:34.053901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.608 [2024-07-22 16:59:34.064417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.608 [2024-07-22 16:59:34.064445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.608 [2024-07-22 16:59:34.064461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.077202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.077232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:24569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.077262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.089370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.089400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.089416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.101930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.101982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.102001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.115109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.115140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.115157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.125549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.125578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.125595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.139340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.139369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.139385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.151354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.151382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.151399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.163111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.163141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.163164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.175201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.175231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.175248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.186982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.187037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.187055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.198855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.198884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.198900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.209674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.209703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.209725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.222477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.222505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.222521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.234494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.609 [2024-07-22 16:59:34.234522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.234538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.609 [2024-07-22 16:59:34.247406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 
00:46:14.609 [2024-07-22 16:59:34.247434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.609 [2024-07-22 16:59:34.247450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.258037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.258066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.258083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.273120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.273149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.273165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.283862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.283890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.283906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.296350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.296379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.296395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.308664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.308693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.308709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.320618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.320648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.320664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.332369] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.332397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.332412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.344367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.344396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.344411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.355736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.355765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.355781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.368520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.368549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.368565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.381286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.381315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.381330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.391403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.391432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.391447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.404392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.404420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.404436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:46:14.867 [2024-07-22 16:59:34.417190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.867 [2024-07-22 16:59:34.417220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.867 [2024-07-22 16:59:34.417242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.868 [2024-07-22 16:59:34.428871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.868 [2024-07-22 16:59:34.428905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.868 [2024-07-22 16:59:34.428924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.868 [2024-07-22 16:59:34.443025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.868 [2024-07-22 16:59:34.443053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.868 [2024-07-22 16:59:34.443069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.868 [2024-07-22 16:59:34.458279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.868 [2024-07-22 16:59:34.458314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.868 [2024-07-22 16:59:34.458333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.868 [2024-07-22 16:59:34.469907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.868 [2024-07-22 16:59:34.469941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.868 [2024-07-22 16:59:34.469961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.868 [2024-07-22 16:59:34.487121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.868 [2024-07-22 16:59:34.487149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.868 [2024-07-22 16:59:34.487165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:14.868 [2024-07-22 16:59:34.504338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:14.868 [2024-07-22 16:59:34.504372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:14.868 [2024-07-22 16:59:34.504391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.125 [2024-07-22 16:59:34.521516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.125 [2024-07-22 16:59:34.521551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.125 [2024-07-22 16:59:34.521571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.125 [2024-07-22 16:59:34.533162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.125 [2024-07-22 16:59:34.533191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.125 [2024-07-22 16:59:34.533207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.125 [2024-07-22 16:59:34.550536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.125 [2024-07-22 16:59:34.550577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.125 [2024-07-22 16:59:34.550596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.125 [2024-07-22 16:59:34.566924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.125 [2024-07-22 16:59:34.566959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.125 [2024-07-22 16:59:34.566989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.125 [2024-07-22 16:59:34.582181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.125 [2024-07-22 16:59:34.582210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.125 [2024-07-22 16:59:34.582226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.125 [2024-07-22 16:59:34.594109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.125 [2024-07-22 16:59:34.594138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.125 [2024-07-22 16:59:34.594153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.125 [2024-07-22 16:59:34.611106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.125 [2024-07-22 16:59:34.611135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.125 [2024-07-22 16:59:34.611152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.125 [2024-07-22 16:59:34.624555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.126 [2024-07-22 16:59:34.624589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.126 [2024-07-22 16:59:34.624608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.126 [2024-07-22 16:59:34.636091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.126 [2024-07-22 16:59:34.636121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.126 [2024-07-22 16:59:34.636137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.126 [2024-07-22 16:59:34.651344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.126 [2024-07-22 16:59:34.651378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.126 [2024-07-22 16:59:34.651397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.126 [2024-07-22 16:59:34.667205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.126 [2024-07-22 16:59:34.667234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.126 [2024-07-22 16:59:34.667265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.126 [2024-07-22 16:59:34.679762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.126 [2024-07-22 16:59:34.679796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.126 [2024-07-22 16:59:34.679815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.126 [2024-07-22 16:59:34.697046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.126 [2024-07-22 16:59:34.697074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.126 [2024-07-22 16:59:34.697090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.126 [2024-07-22 16:59:34.708872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.126 [2024-07-22 16:59:34.708906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:46:15.126 [2024-07-22 16:59:34.708926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.126 [2024-07-22 16:59:34.724643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.126 [2024-07-22 16:59:34.724677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.126 [2024-07-22 16:59:34.724696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.126 [2024-07-22 16:59:34.739069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.126 [2024-07-22 16:59:34.739097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.126 [2024-07-22 16:59:34.739112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.126 [2024-07-22 16:59:34.752250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.126 [2024-07-22 16:59:34.752279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.126 [2024-07-22 16:59:34.752310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.126 [2024-07-22 16:59:34.764909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.126 [2024-07-22 16:59:34.764943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.126 [2024-07-22 16:59:34.764961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.383 [2024-07-22 16:59:34.778340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.383 [2024-07-22 16:59:34.778373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.383 [2024-07-22 16:59:34.778393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.383 [2024-07-22 16:59:34.792086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.383 [2024-07-22 16:59:34.792115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.792136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.805174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.805204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:15535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.805220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.819162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.819190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.819206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.831736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.831769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.831789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.845037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.845065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.845081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.859893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.859927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.859945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.870690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.870724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.870743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.885734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.885770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.885789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.901176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.901204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.901227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.914151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.914184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.914199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.929018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.929047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.929073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.941420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.941454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.941473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.957995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.958039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.958054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.969524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.969558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.969577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:34.987257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:34.987299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:34.987315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:35.004726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 
00:46:15.384 [2024-07-22 16:59:35.004761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:35.004780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:35.016759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:35.016793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:35.016812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.384 [2024-07-22 16:59:35.031261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.384 [2024-07-22 16:59:35.031291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.384 [2024-07-22 16:59:35.031330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.642 [2024-07-22 16:59:35.049211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.642 [2024-07-22 16:59:35.049247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.642 [2024-07-22 16:59:35.049264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.642 [2024-07-22 16:59:35.063735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.642 [2024-07-22 16:59:35.063769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.642 [2024-07-22 16:59:35.063789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.642 [2024-07-22 16:59:35.075778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.642 [2024-07-22 16:59:35.075813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.642 [2024-07-22 16:59:35.075833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.642 [2024-07-22 16:59:35.088801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840) 00:46:15.642 [2024-07-22 16:59:35.088836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:15.642 [2024-07-22 16:59:35.088855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:15.642 [2024-07-22 16:59:35.103612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x17c5840)
00:46:15.642 [2024-07-22 16:59:35.103647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:46:15.642 [2024-07-22 16:59:35.103666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:46:15.642 [2024-07-22 16:59:35.116956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840)
00:46:15.642 [2024-07-22 16:59:35.117012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:46:15.642 [2024-07-22 16:59:35.117029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:46:15.642 [2024-07-22 16:59:35.131871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840)
00:46:15.642 [2024-07-22 16:59:35.131905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:46:15.642 [2024-07-22 16:59:35.131923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:46:15.642 [2024-07-22 16:59:35.143807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840)
00:46:15.642 [2024-07-22 16:59:35.143840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:46:15.642 [2024-07-22 16:59:35.143859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:46:15.642 [2024-07-22 16:59:35.159463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17c5840)
00:46:15.642 [2024-07-22 16:59:35.159508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:46:15.642 [2024-07-22 16:59:35.159528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:46:15.642
00:46:15.642                                Latency(us)
00:46:15.642 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:46:15.642 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:46:15.642 nvme0n1                     :       2.01    18981.58      74.15       0.00     0.00    6735.32    2936.98   24466.77
00:46:15.642 ===================================================================================================================
00:46:15.642 Total                       :               18981.58      74.15       0.00     0.00    6735.32    2936.98   24466.77
00:46:15.642 0
00:46:15.642 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:46:15.642 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:46:15.642 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:46:15.642 | .driver_specific
00:46:15.642 | .nvme_error
00:46:15.642 | .status_code
00:46:15.642 | .command_transient_transport_error'
16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 ))
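The (( 149 > 0 )) check above is the pass/fail assertion for this pass: 149 is the number of COMMAND TRANSIENT TRANSPORT ERROR completions that the injected digest failures produced, and the Fail/s column stays at zero because every one of them was retried rather than failed. As a reading aid, here is a minimal sketch of the get_transient_errcount helper the trace expands, assuming an SPDK checkout (scripts/rpc.py as used above) and a bdevperf instance serving RPC on /var/tmp/bperf.sock; the jq filter is taken verbatim from the trace:

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat includes per-bdev NVMe status-code counters when the
        # controller was created with --nvme-error-stat; extract the transient
        # transport error count that the data digest failures increment.
        ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)   # 149 in the run above
    (( errcount > 0 ))                           # the test only requires it be non-zero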
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2958533
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2958533 ']'
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2958533
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2958533
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2958533'
killing process with pid 2958533
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2958533
Received shutdown signal, test time was about 2.000000 seconds
00:46:15.900
00:46:15.900                                Latency(us)
00:46:15.900 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:46:15.900 ===================================================================================================================
00:46:15.900 Total                       :       0.00        0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:46:15.900 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2958533
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2958960
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2958960 /var/tmp/bperf.sock
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2958960 ']'
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:46:16.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
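run_bperf_err then restarts bdevperf for the next pass (random 128 KiB reads at queue depth 16). A sketch of what the traced launch amounts to, run from the SPDK checkout, with a simplified polling loop standing in for the suite's full waitforlisten helper (rpc_get_methods here is just a cheap RPC used to probe whether the socket answers; that probe is an assumption, the real helper is more careful):

    rw=randread bs=131072 qd=16
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w "$rw" -o "$bs" -q "$qd" -t 2 -z &    # -z: idle until perform_tests is sent
    bperfpid=$!

    # Wait until the bdevperf RPC socket answers before configuring it.
    max_retries=100
    until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; do
        (( --max_retries > 0 )) || exit 1
        sleep 0.1
    done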
00:46:16.158 [2024-07-22 16:59:35.720263] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:46:16.158 [2024-07-22 16:59:35.720348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958960 ]
00:46:16.158 I/O size of 131072 is greater than zero copy threshold (65536).
00:46:16.158 Zero copy mechanism will not be used.
00:46:16.158 EAL: No free 2048 kB hugepages reported on node 1
00:46:16.158 [2024-07-22 16:59:35.791397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:46:16.158 [2024-07-22 16:59:35.883202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:46:16.158 16:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:46:16.674 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:46:16.674 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:46:16.674 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:46:16.674 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:46:16.674 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:46:16.674 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:46:16.931 nvme0n1
00:46:17.189 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:46:17.189 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:46:17.189 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:46:17.189 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:46:17.189 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:46:17.189 16:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:46:17.189 I/O size of 131072 is
greater than zero copy threshold (65536). 00:46:17.189 Zero copy mechanism will not be used. 00:46:17.189 Running I/O for 2 seconds... 00:46:17.189 [2024-07-22 16:59:36.705168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.705231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.705252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.712943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.712990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.713031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.720927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.720961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.721010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.728604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.728637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.728666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.736756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.736796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.736825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.744542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.744575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.744594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.752364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.752397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:46:17.189 [2024-07-22 16:59:36.752417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.760235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.760285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.760299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.768106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.768134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.768155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.776256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.776291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.776310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.784579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.784613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.784631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.792399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.792433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.792452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.800048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.800077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.800093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.807958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.808013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.808031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.815953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.816013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.816031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.189 [2024-07-22 16:59:36.823851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.189 [2024-07-22 16:59:36.823885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.189 [2024-07-22 16:59:36.823904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.190 [2024-07-22 16:59:36.831743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.190 [2024-07-22 16:59:36.831777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.190 [2024-07-22 16:59:36.831799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.839641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.839679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.839699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.847343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.847375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.847394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.855324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.855365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.855384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.863118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.863149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.863166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.870973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.871006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.871039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.878753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.878786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.878806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.886652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.886685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.886704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.894544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.894578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.894597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.902302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.902348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.902366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.910053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.910083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.910099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.917736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 
00:46:17.448 [2024-07-22 16:59:36.917770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.917789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.925504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.925537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.925556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.933340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.933373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.933393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.941196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.941225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.941243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.949401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.949435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.949454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.957573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.957614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.957633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.966009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.966053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.966072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.974012] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.974040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.974066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.981798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.981839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.981857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.989937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.989980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.990018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:36.998127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:36.998163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:36.998178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:37.006224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:37.006252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:37.006267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:37.014109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:37.014136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:37.014152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:37.022110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:37.022139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:37.022159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:46:17.448 [2024-07-22 16:59:37.030184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:37.030223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:37.030239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:37.038258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:37.038310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:37.038325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:37.046565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:37.046616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:37.046635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:37.054580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:37.054614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:37.054633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:37.062528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:37.062574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:37.062593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:37.070399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:37.070432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:37.070452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:37.078261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:37.078308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.448 [2024-07-22 16:59:37.078327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.448 [2024-07-22 16:59:37.086150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.448 [2024-07-22 16:59:37.086178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.449 [2024-07-22 16:59:37.086194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.449 [2024-07-22 16:59:37.094023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.449 [2024-07-22 16:59:37.094052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.449 [2024-07-22 16:59:37.094071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.707 [2024-07-22 16:59:37.102182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.707 [2024-07-22 16:59:37.102210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.707 [2024-07-22 16:59:37.102227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.707 [2024-07-22 16:59:37.110602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.707 [2024-07-22 16:59:37.110637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.707 [2024-07-22 16:59:37.110657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.707 [2024-07-22 16:59:37.120102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.707 [2024-07-22 16:59:37.120134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.707 [2024-07-22 16:59:37.120150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.707 [2024-07-22 16:59:37.129821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.707 [2024-07-22 16:59:37.129854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.707 [2024-07-22 16:59:37.129873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.707 [2024-07-22 16:59:37.139470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.707 [2024-07-22 16:59:37.139504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.707 [2024-07-22 16:59:37.139523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.707 [2024-07-22 16:59:37.150194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.707 [2024-07-22 16:59:37.150222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.707 [2024-07-22 16:59:37.150238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.707 [2024-07-22 16:59:37.161045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.707 [2024-07-22 16:59:37.161072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.707 [2024-07-22 16:59:37.161087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.172739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.172774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.172793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.181190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.181218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.181235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.189458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.189491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.189510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.197444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.197477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.197503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.205546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.205580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:46:17.708 [2024-07-22 16:59:37.205600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.213794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.213828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.213846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.222564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.222597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.222617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.231396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.231429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.231448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.240190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.240218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.240234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.249422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.249457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.249477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.258586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.258620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.258640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.266742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.266775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.266794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.274847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.274879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.274899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.282538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.282570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.282597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.290775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.290809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.290828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.299842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.299874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.299893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.308056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.308083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.308099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.317398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.317431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.317449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.326552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.326585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.326604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.336410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.336445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.336464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.345219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.345247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.345268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.708 [2024-07-22 16:59:37.355769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.708 [2024-07-22 16:59:37.355802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.708 [2024-07-22 16:59:37.355821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.367221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.367249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.367264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.379121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.379148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.379163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.391087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.391114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.391130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.403222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 
00:46:17.966 [2024-07-22 16:59:37.403249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.403282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.415236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.415283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.415301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.427348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.427380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.427399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.440034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.440063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.440079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.452598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.452639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.452659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.465484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.465517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.465536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.479147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.479174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.479190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.492531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.492563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.492582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.506283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.506316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.506334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.519677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.519711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.519730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.533688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.533721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.533740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.546953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.966 [2024-07-22 16:59:37.546995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.966 [2024-07-22 16:59:37.547015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.966 [2024-07-22 16:59:37.561082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.967 [2024-07-22 16:59:37.561109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.967 [2024-07-22 16:59:37.561124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:17.967 [2024-07-22 16:59:37.574864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.967 [2024-07-22 16:59:37.574897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.967 [2024-07-22 16:59:37.574916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:17.967 [2024-07-22 16:59:37.588070] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.967 [2024-07-22 16:59:37.588097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.967 [2024-07-22 16:59:37.588112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:17.967 [2024-07-22 16:59:37.601595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.967 [2024-07-22 16:59:37.601628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.967 [2024-07-22 16:59:37.601646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:17.967 [2024-07-22 16:59:37.615183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:17.967 [2024-07-22 16:59:37.615212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:17.967 [2024-07-22 16:59:37.615228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.628659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.628693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.628712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.642873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.642909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.642928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.657043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.657071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.657087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.670822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.670855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.670874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:46:18.225 [2024-07-22 16:59:37.684978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.685023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.685044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.699329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.699362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.699380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.713170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.713198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.713214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.727088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.727116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.727132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.740456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.740491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.740510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.753820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.753855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.753874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.766531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.766566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.766585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.779219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.779249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.779265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.792086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.792116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.792132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.803814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.803849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.803869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.816170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.816199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.816217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.827156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.827184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.827201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.838266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.838301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.225 [2024-07-22 16:59:37.838320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.225 [2024-07-22 16:59:37.848531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.225 [2024-07-22 16:59:37.848565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.226 [2024-07-22 16:59:37.848584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.226 [2024-07-22 16:59:37.859887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.226 [2024-07-22 16:59:37.859921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.226 [2024-07-22 16:59:37.859940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.226 [2024-07-22 16:59:37.870813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.226 [2024-07-22 16:59:37.870847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.226 [2024-07-22 16:59:37.870867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.881993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.882035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.882051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.892991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.893035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.893056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.903077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.903105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.903121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.913694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.913729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.913747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.924694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.924729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:46:18.484 [2024-07-22 16:59:37.924748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.935153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.935182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.935198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.944595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.944628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.944648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.953114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.953143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.953160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.961615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.961647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.961666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.969993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.970040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.970057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.978570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.978608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.978628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.988018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.988048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.988065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:37.996485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:37.996518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:37.996537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:38.005202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:38.005231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:38.005262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.484 [2024-07-22 16:59:38.013830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.484 [2024-07-22 16:59:38.013863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.484 [2024-07-22 16:59:38.013881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.022456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.022489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.022507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.030983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.031029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.031046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.039591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.039624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.039643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.048137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.048165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.048181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.056801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.056834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.056853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.065208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.065236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.065269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.073931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.073972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.074008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.082109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.082137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.082154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.090423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.090456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.090475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.099017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.099047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.099062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.107433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 
00:46:18.485 [2024-07-22 16:59:38.107465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.107483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.116539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.116572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.116592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.485 [2024-07-22 16:59:38.125666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.485 [2024-07-22 16:59:38.125701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.485 [2024-07-22 16:59:38.125726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.134122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.134154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.134171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.142541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.142573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.142592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.150719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.150752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.150771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.158894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.158927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.158945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.167016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.167053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.167069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.175100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.175127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.175143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.183223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.183267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.183285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.191864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.191898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.191917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.200009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.200041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.200057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.208160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.208188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.208203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.216264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.216307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.216327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.224497] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.743 [2024-07-22 16:59:38.224529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.743 [2024-07-22 16:59:38.224548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.743 [2024-07-22 16:59:38.232506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.232538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.232557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.240587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.240619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.240637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.248730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.248761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.248779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.256861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.256892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.256910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.265008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.265052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.265067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.273227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.273254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.273269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:46:18.744 [2024-07-22 16:59:38.281424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.281455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.281473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.289565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.289596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.289615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.297750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.297781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.297800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.305895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.305926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.305945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.314047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.314072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.314087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.322096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.322121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.322137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.330252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.330285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.330305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.338321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.338357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.338378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.346944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.346992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.347024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.354951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.354994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.355013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.362985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.363029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.363045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.370977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.371022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.371040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.379479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.379518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.379536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:18.744 [2024-07-22 16:59:38.387527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:18.744 [2024-07-22 16:59:38.387560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:18.744 [2024-07-22 16:59:38.387579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.396069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.396099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.396116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.404026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.404053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.404067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.411728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.411761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.411790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.419451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.419483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.419503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.427171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.427198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.427219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.435025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.435053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.435072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.442693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.442725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:46:19.003 [2024-07-22 16:59:38.442744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.450357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.450389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.450408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.458046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.458075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.458094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.465696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.465729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.465747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.473680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.473713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.473743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.481801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.481833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.481853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.489684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.489717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.489735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.497492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.497524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.497542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.505497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.505534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.505553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.513415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.513447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.513467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:19.003 [2024-07-22 16:59:38.521340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.003 [2024-07-22 16:59:38.521372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.003 [2024-07-22 16:59:38.521390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.529164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.529203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.529219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.538158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.538194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.538209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.546651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.546691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.546711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.554923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.554977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.555015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.562876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.562909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.562927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.570673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.570705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.570725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.578490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.578522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.578540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.586339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.586372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.586391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.594362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.594400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.594419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.602605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.602643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.602662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.610951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 
00:46:19.004 [2024-07-22 16:59:38.611007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.611025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.619164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.619198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.619214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.626869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.626900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.626928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.634748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.634780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.634798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.642524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.642556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.642574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:19.004 [2024-07-22 16:59:38.651686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.004 [2024-07-22 16:59:38.651720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.004 [2024-07-22 16:59:38.651739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:19.262 [2024-07-22 16:59:38.659686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.262 [2024-07-22 16:59:38.659719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.262 [2024-07-22 16:59:38.659738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:19.262 [2024-07-22 16:59:38.668020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.262 [2024-07-22 16:59:38.668047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.262 [2024-07-22 16:59:38.668069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:19.262 [2024-07-22 16:59:38.676028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.262 [2024-07-22 16:59:38.676058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.262 [2024-07-22 16:59:38.676073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:19.262 [2024-07-22 16:59:38.683891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.262 [2024-07-22 16:59:38.683923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.262 [2024-07-22 16:59:38.683949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:19.262 [2024-07-22 16:59:38.692204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.262 [2024-07-22 16:59:38.692231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.262 [2024-07-22 16:59:38.692263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:19.262 [2024-07-22 16:59:38.700299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf22c0) 00:46:19.262 [2024-07-22 16:59:38.700335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:19.262 [2024-07-22 16:59:38.700355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:19.262 00:46:19.262 Latency(us) 00:46:19.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:19.262 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:46:19.262 nvme0n1 : 2.00 3373.87 421.73 0.00 0.00 4736.70 3689.43 14369.37 00:46:19.262 =================================================================================================================== 00:46:19.262 Total : 3373.87 421.73 0.00 0.00 4736.70 3689.43 14369.37 00:46:19.262 0 00:46:19.262 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:46:19.263 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:46:19.263 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:46:19.263 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:46:19.263 | .driver_specific 
00:46:19.263 | .nvme_error 00:46:19.263 | .status_code 00:46:19.263 | .command_transient_transport_error' 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 )) 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2958960 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2958960 ']' 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2958960 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2958960 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2958960' 00:46:19.521 killing process with pid 2958960 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2958960 00:46:19.521 Received shutdown signal, test time was about 2.000000 seconds 00:46:19.521 00:46:19.521 Latency(us) 00:46:19.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:19.521 =================================================================================================================== 00:46:19.521 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:19.521 16:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2958960 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2959460 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2959460 /var/tmp/bperf.sock 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2959460 ']' 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:46:19.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:46:19.780 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:19.780 [2024-07-22 16:59:39.229828] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:46:19.780 [2024-07-22 16:59:39.229920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959460 ] 00:46:19.780 EAL: No free 2048 kB hugepages reported on node 1 00:46:19.780 [2024-07-22 16:59:39.298843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:19.780 [2024-07-22 16:59:39.388577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:20.038 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:46:20.038 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:46:20.038 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:46:20.038 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:46:20.295 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:46:20.295 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:20.295 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:20.295 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:20.295 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:20.295 16:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:20.861 nvme0n1 00:46:20.861 16:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:46:20.861 16:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:20.861 16:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:20.861 16:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:20.861 16:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:46:20.862 16:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:20.862 Running I/O for 2 seconds... 
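The xtrace above compresses the setup for this randwrite leg into interleaved shell commands and their output. Restated as a plain command sequence it reads as the following minimal sketch; the paths and addresses are only the ones the trace itself prints (SPDK checkout under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, bdevperf RPC socket /var/tmp/bperf.sock), and it assumes the bare rpc_cmd invocations, which pass no -s flag, address the NVMe-oF target application rather than bdevperf:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock

  # Start bdevperf on core 1 (-m 2): 4 KiB random writes, queue depth 128,
  # 2-second runs; -z makes it idle until a perform_tests RPC arrives.
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 4096 -t 2 -q 128 -z &

  # Count errors per NVMe status code and retry failed I/O indefinitely (-1),
  # so injected digest failures are retried and tallied instead of failing the job.
  $RPC -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep corruption off while connecting, then attach with TCP data digest
  # (--ddgst) enabled; this creates bdev nvme0n1.
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Re-enable crc32c corruption in the accel framework (arguments exactly as
  # captured in the trace above), then kick off the timed run.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

  # Afterwards, read back the transient-error counter the same way
  # get_transient_errcount did for the randread leg:
  $RPC -s $SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
               | .command_transient_transport_error'

With crc32c results corrupted in the accel framework, data digest verification fails on the receive path (the tcp.c:2058:data_crc32_calc_done errors that follow), each affected command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), and bdev_nvme accumulates those completions under command_transient_transport_error, the same counter the (( 218 > 0 )) check asserted to be non-zero after the randread run above.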
00:46:20.862 [2024-07-22 16:59:40.453323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fa3a0 00:46:20.862 [2024-07-22 16:59:40.454344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:20.862 [2024-07-22 16:59:40.454383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:46:20.862 [2024-07-22 16:59:40.464876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190eaab8 00:46:20.862 [2024-07-22 16:59:40.465898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:20.862 [2024-07-22 16:59:40.465926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:46:20.862 [2024-07-22 16:59:40.476454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f7da8 00:46:20.862 [2024-07-22 16:59:40.477454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:20.862 [2024-07-22 16:59:40.477482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:46:20.862 [2024-07-22 16:59:40.487046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ee190 00:46:20.862 [2024-07-22 16:59:40.488031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:20.862 [2024-07-22 16:59:40.488069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:46:20.862 [2024-07-22 16:59:40.498541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f35f0 00:46:20.862 [2024-07-22 16:59:40.499568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:20.862 [2024-07-22 16:59:40.499594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:46:20.862 [2024-07-22 16:59:40.510460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190eb760 00:46:21.120 [2024-07-22 16:59:40.511538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.120 [2024-07-22 16:59:40.511568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:46:21.120 [2024-07-22 16:59:40.522291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f57b0 00:46:21.120 [2024-07-22 16:59:40.523317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.120 [2024-07-22 16:59:40.523344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:46:21.120 [2024-07-22 16:59:40.533417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e84c0 00:46:21.120 [2024-07-22 16:59:40.534430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.120 [2024-07-22 16:59:40.534464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:21.120 [2024-07-22 16:59:40.544569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fb8b8 00:46:21.120 [2024-07-22 16:59:40.545622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.120 [2024-07-22 16:59:40.545649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:21.120 [2024-07-22 16:59:40.555681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fa7d8 00:46:21.120 [2024-07-22 16:59:40.556693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.120 [2024-07-22 16:59:40.556720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:21.120 [2024-07-22 16:59:40.566817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f96f8 00:46:21.120 [2024-07-22 16:59:40.567948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.120 [2024-07-22 16:59:40.568000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:21.120 [2024-07-22 16:59:40.578037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ebb98 00:46:21.120 [2024-07-22 16:59:40.579046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.120 [2024-07-22 16:59:40.579076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:21.120 [2024-07-22 16:59:40.589159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e2c28 00:46:21.120 [2024-07-22 16:59:40.590187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.120 [2024-07-22 16:59:40.590214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:21.120 [2024-07-22 16:59:40.600371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e3d08 00:46:21.120 [2024-07-22 16:59:40.601382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.120 [2024-07-22 16:59:40.601408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:21.120 [2024-07-22 16:59:40.611500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e4de8 00:46:21.120 [2024-07-22 16:59:40.612535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.120 [2024-07-22 16:59:40.612561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:21.120 [2024-07-22 16:59:40.622663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e5ec8 00:46:21.121 [2024-07-22 16:59:40.623721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.623748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.635274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f0bc0 00:46:21.121 [2024-07-22 16:59:40.636812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.636838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.646858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ecc78 00:46:21.121 [2024-07-22 16:59:40.648580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.648607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.658403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190feb58 00:46:21.121 [2024-07-22 16:59:40.660259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.660300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.666201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e99d8 00:46:21.121 [2024-07-22 16:59:40.666977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.667018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.679184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f8618 00:46:21.121 [2024-07-22 16:59:40.680175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.680203] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.689919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ff3c8 00:46:21.121 [2024-07-22 16:59:40.691626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.691654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.700471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ebb98 00:46:21.121 [2024-07-22 16:59:40.701267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.701309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.711908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fc128 00:46:21.121 [2024-07-22 16:59:40.712843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.712885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.723685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e1f80 00:46:21.121 [2024-07-22 16:59:40.724749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.724782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.735215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f1868 00:46:21.121 [2024-07-22 16:59:40.736453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.736480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.746779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e27f0 00:46:21.121 [2024-07-22 16:59:40.748300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.748328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.756211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f6cc8 00:46:21.121 [2024-07-22 16:59:40.757107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.757135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.121 [2024-07-22 16:59:40.767946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f2d80 00:46:21.121 [2024-07-22 16:59:40.768899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.121 [2024-07-22 16:59:40.768928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.779907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190dfdc0 00:46:21.379 [2024-07-22 16:59:40.780847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.780875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.791587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e0ea0 00:46:21.379 [2024-07-22 16:59:40.792483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.792509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.803162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fc998 00:46:21.379 [2024-07-22 16:59:40.804022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.804051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.814518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f3a28 00:46:21.379 [2024-07-22 16:59:40.815386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.815413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.825654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e88f8 00:46:21.379 [2024-07-22 16:59:40.826547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.826573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.836827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190de470 00:46:21.379 [2024-07-22 16:59:40.837742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 
16:59:40.837768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.848063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f4f40 00:46:21.379 [2024-07-22 16:59:40.848906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.848932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.859270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ddc00 00:46:21.379 [2024-07-22 16:59:40.860112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.860139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.870513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fc128 00:46:21.379 [2024-07-22 16:59:40.871418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.871455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.881735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e6300 00:46:21.379 [2024-07-22 16:59:40.882602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.882629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.892859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fef90 00:46:21.379 [2024-07-22 16:59:40.893739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.893765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.904084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e73e0 00:46:21.379 [2024-07-22 16:59:40.904937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.904985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.915215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f0bc0 00:46:21.379 [2024-07-22 16:59:40.916063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:46:21.379 [2024-07-22 16:59:40.916089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.926426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f8a50 00:46:21.379 [2024-07-22 16:59:40.927284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.927324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.937603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f7970 00:46:21.379 [2024-07-22 16:59:40.938479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.938505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.948851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f31b8 00:46:21.379 [2024-07-22 16:59:40.949784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.379 [2024-07-22 16:59:40.949810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.379 [2024-07-22 16:59:40.960156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f20d8 00:46:21.379 [2024-07-22 16:59:40.961009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.380 [2024-07-22 16:59:40.961036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.380 [2024-07-22 16:59:40.971306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e0a68 00:46:21.380 [2024-07-22 16:59:40.972166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.380 [2024-07-22 16:59:40.972192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.380 [2024-07-22 16:59:40.982701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e1b48 00:46:21.380 [2024-07-22 16:59:40.983647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.380 [2024-07-22 16:59:40.983673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.380 [2024-07-22 16:59:40.993930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ef6a8 00:46:21.380 [2024-07-22 16:59:40.994822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24652 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:46:21.380 [2024-07-22 16:59:40.994848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.380 [2024-07-22 16:59:41.005122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f46d0 00:46:21.380 [2024-07-22 16:59:41.005988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.380 [2024-07-22 16:59:41.006015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.380 [2024-07-22 16:59:41.016268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e95a0 00:46:21.380 [2024-07-22 16:59:41.017161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.380 [2024-07-22 16:59:41.017189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.380 [2024-07-22 16:59:41.027833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190df118 00:46:21.638 [2024-07-22 16:59:41.028763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.028805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.039382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f5378 00:46:21.638 [2024-07-22 16:59:41.040245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.040286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.050468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e8088 00:46:21.638 [2024-07-22 16:59:41.051334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.051360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.061551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f1ca0 00:46:21.638 [2024-07-22 16:59:41.062406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.062432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.072654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f6020 00:46:21.638 [2024-07-22 16:59:41.073531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:16587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.073556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.083731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e7818 00:46:21.638 [2024-07-22 16:59:41.084609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.084635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.095058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f0ff8 00:46:21.638 [2024-07-22 16:59:41.095909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.095934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.106191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190eff18 00:46:21.638 [2024-07-22 16:59:41.107016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.107043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.117318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f7da8 00:46:21.638 [2024-07-22 16:59:41.118174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.118207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.128505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f6cc8 00:46:21.638 [2024-07-22 16:59:41.129361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.129387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.139580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f2d80 00:46:21.638 [2024-07-22 16:59:41.140475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.140501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.150724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190dfdc0 00:46:21.638 [2024-07-22 16:59:41.151602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:86 nsid:1 lba:8155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.151629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.163308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e0ea0 00:46:21.638 [2024-07-22 16:59:41.164749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.164775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.173718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ec408 00:46:21.638 [2024-07-22 16:59:41.174739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.174766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.184746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ed4e8 00:46:21.638 [2024-07-22 16:59:41.185751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.185778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.196018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ebfd0 00:46:21.638 [2024-07-22 16:59:41.196997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.197024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.207168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e27f0 00:46:21.638 [2024-07-22 16:59:41.208191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.638 [2024-07-22 16:59:41.208219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.638 [2024-07-22 16:59:41.218356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e38d0 00:46:21.638 [2024-07-22 16:59:41.219409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.639 [2024-07-22 16:59:41.219436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.639 [2024-07-22 16:59:41.229617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f35f0 00:46:21.639 [2024-07-22 16:59:41.230638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.639 [2024-07-22 16:59:41.230664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.639 [2024-07-22 16:59:41.240962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f2510 00:46:21.639 [2024-07-22 16:59:41.241975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.639 [2024-07-22 16:59:41.242002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.639 [2024-07-22 16:59:41.252141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e0630 00:46:21.639 [2024-07-22 16:59:41.253131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.639 [2024-07-22 16:59:41.253158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.639 [2024-07-22 16:59:41.263250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e1710 00:46:21.639 [2024-07-22 16:59:41.264293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.639 [2024-07-22 16:59:41.264319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.639 [2024-07-22 16:59:41.274434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190efae0 00:46:21.639 [2024-07-22 16:59:41.275472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.639 [2024-07-22 16:59:41.275498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.639 [2024-07-22 16:59:41.285835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fdeb0 00:46:21.639 [2024-07-22 16:59:41.286870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.639 [2024-07-22 16:59:41.286898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.897 [2024-07-22 16:59:41.297427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190df550 00:46:21.897 [2024-07-22 16:59:41.298426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.897 [2024-07-22 16:59:41.298453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.897 [2024-07-22 16:59:41.308548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190eee38 00:46:21.897 [2024-07-22 
16:59:41.309567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.897 [2024-07-22 16:59:41.309593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.897 [2024-07-22 16:59:41.319651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190edd58 00:46:21.897 [2024-07-22 16:59:41.320700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.897 [2024-07-22 16:59:41.320726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.897 [2024-07-22 16:59:41.330782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e5658 00:46:21.897 [2024-07-22 16:59:41.331839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.897 [2024-07-22 16:59:41.331865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.897 [2024-07-22 16:59:41.341907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e4578 00:46:21.897 [2024-07-22 16:59:41.342925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.897 [2024-07-22 16:59:41.342951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.897 [2024-07-22 16:59:41.353004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fa3a0 00:46:21.897 [2024-07-22 16:59:41.354000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.897 [2024-07-22 16:59:41.354028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.897 [2024-07-22 16:59:41.364109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ed0b0 00:46:21.897 [2024-07-22 16:59:41.365075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.897 [2024-07-22 16:59:41.365102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.897 [2024-07-22 16:59:41.375226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f8e88 00:46:21.898 [2024-07-22 16:59:41.376193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.376220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.386662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e6300 
00:46:21.898 [2024-07-22 16:59:41.387438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.387464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.398445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190efae0 00:46:21.898 [2024-07-22 16:59:41.399497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.399524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.409656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f6890 00:46:21.898 [2024-07-22 16:59:41.411243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.411292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.419274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f6020 00:46:21.898 [2024-07-22 16:59:41.420021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.420047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.431018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e1f80 00:46:21.898 [2024-07-22 16:59:41.431927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.431975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.442675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e6300 00:46:21.898 [2024-07-22 16:59:41.443772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.443798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.454416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fc560 00:46:21.898 [2024-07-22 16:59:41.455613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.455639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.466140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with 
pdu=0x2000190f4298 00:46:21.898 [2024-07-22 16:59:41.467517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.467545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.477759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e1f80 00:46:21.898 [2024-07-22 16:59:41.479296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.479324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.489445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fac10 00:46:21.898 [2024-07-22 16:59:41.491154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.491195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.501524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190eff18 00:46:21.898 [2024-07-22 16:59:41.503460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.503487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.509603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fdeb0 00:46:21.898 [2024-07-22 16:59:41.510426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.510452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.521541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f1430 00:46:21.898 [2024-07-22 16:59:41.522508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.522536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.532331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ebb98 00:46:21.898 [2024-07-22 16:59:41.533237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.533287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:46:21.898 [2024-07-22 16:59:41.545111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x243a910) with pdu=0x2000190f31b8 00:46:21.898 [2024-07-22 16:59:41.546414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:21.898 [2024-07-22 16:59:41.546446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:46:22.156 [2024-07-22 16:59:41.558542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ef270 00:46:22.156 [2024-07-22 16:59:41.560043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.156 [2024-07-22 16:59:41.560069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:46:22.156 [2024-07-22 16:59:41.570409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190eee38 00:46:22.156 [2024-07-22 16:59:41.571720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.156 [2024-07-22 16:59:41.571752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:22.156 [2024-07-22 16:59:41.583502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190eea00 00:46:22.156 [2024-07-22 16:59:41.584805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.156 [2024-07-22 16:59:41.584837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.156 [2024-07-22 16:59:41.596216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ed920 00:46:22.156 [2024-07-22 16:59:41.597534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.156 [2024-07-22 16:59:41.597565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.156 [2024-07-22 16:59:41.608961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190eaef0 00:46:22.156 [2024-07-22 16:59:41.610314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.156 [2024-07-22 16:59:41.610358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.156 [2024-07-22 16:59:41.621714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e4140 00:46:22.156 [2024-07-22 16:59:41.623030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.156 [2024-07-22 16:59:41.623056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.156 [2024-07-22 16:59:41.634348] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x243a910) with pdu=0x2000190fda78 00:46:22.156 [2024-07-22 16:59:41.635661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.156 [2024-07-22 16:59:41.635692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.156 [2024-07-22 16:59:41.647047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fb8b8 00:46:22.156 [2024-07-22 16:59:41.648345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.156 [2024-07-22 16:59:41.648377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.156 [2024-07-22 16:59:41.659657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f4298 00:46:22.156 [2024-07-22 16:59:41.660945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.660986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.157 [2024-07-22 16:59:41.672268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e1710 00:46:22.157 [2024-07-22 16:59:41.673568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.673600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.157 [2024-07-22 16:59:41.685010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190efae0 00:46:22.157 [2024-07-22 16:59:41.686304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.686347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.157 [2024-07-22 16:59:41.697626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f4f40 00:46:22.157 [2024-07-22 16:59:41.698939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.698977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.157 [2024-07-22 16:59:41.710290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190de470 00:46:22.157 [2024-07-22 16:59:41.711602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.711634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.157 [2024-07-22 16:59:41.723063] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fc560 00:46:22.157 [2024-07-22 16:59:41.724323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.724361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.157 [2024-07-22 16:59:41.735680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190df988 00:46:22.157 [2024-07-22 16:59:41.736970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.737014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.157 [2024-07-22 16:59:41.748262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e6b70 00:46:22.157 [2024-07-22 16:59:41.749585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.749617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.157 [2024-07-22 16:59:41.761104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e27f0 00:46:22.157 [2024-07-22 16:59:41.762380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.762412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.157 [2024-07-22 16:59:41.773755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e5658 00:46:22.157 [2024-07-22 16:59:41.775037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.775064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.157 [2024-07-22 16:59:41.788022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190edd58 00:46:22.157 [2024-07-22 16:59:41.789919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.789950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:22.157 [2024-07-22 16:59:41.801176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190ddc00 00:46:22.157 [2024-07-22 16:59:41.803300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.157 [2024-07-22 16:59:41.803331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:46:22.424 
[2024-07-22 16:59:41.810137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f7538 00:46:22.424 [2024-07-22 16:59:41.810907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.424 [2024-07-22 16:59:41.810933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:46:22.424 [2024-07-22 16:59:41.823199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fb048 00:46:22.424 [2024-07-22 16:59:41.824346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.424 [2024-07-22 16:59:41.824378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:46:22.424 [2024-07-22 16:59:41.836168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e1f80 00:46:22.424 [2024-07-22 16:59:41.837346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.424 [2024-07-22 16:59:41.837378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:46:22.424 [2024-07-22 16:59:41.848953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e38d0 00:46:22.424 [2024-07-22 16:59:41.850115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.424 [2024-07-22 16:59:41.850143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:46:22.424 [2024-07-22 16:59:41.861694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f9b30 00:46:22.424 [2024-07-22 16:59:41.862855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.424 [2024-07-22 16:59:41.862886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:46:22.424 [2024-07-22 16:59:41.873549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fe2e8 00:46:22.424 [2024-07-22 16:59:41.874637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.424 [2024-07-22 16:59:41.874668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:46:22.424 [2024-07-22 16:59:41.886748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e6300 00:46:22.424 [2024-07-22 16:59:41.887987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.424 [2024-07-22 16:59:41.888029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 
dnr:0
00:46:22.424 [2024-07-22 16:59:41.899882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190e6b70
00:46:22.424 [2024-07-22 16:59:41.901230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:46:22.424 [2024-07-22 16:59:41.901256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0
[... roughly forty further data-digest-error / WRITE / TRANSIENT TRANSPORT ERROR record triplets, 16:59:41.913 through 16:59:42.407, identical in form on tqpair=(0x243a910), with pdu, cid, and lba varying ...]
00:46:22.943 [2024-07-22 16:59:42.419684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190fc128
00:46:22.943 [2024-07-22 16:59:42.420786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:46:22.943 [2024-07-22 16:59:42.420817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:46:22.943 [2024-07-22 16:59:42.432335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243a910) with pdu=0x2000190f4b08 00:46:22.943 [2024-07-22 16:59:42.433467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:22.943 [2024-07-22 16:59:42.433500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:46:22.943 00:46:22.943 Latency(us) 00:46:22.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:22.943 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:46:22.943 nvme0n1 : 2.01 21468.00 83.86 0.00 0.00 5952.69 2257.35 16893.72 00:46:22.943 =================================================================================================================== 00:46:22.943 Total : 21468.00 83.86 0.00 0.00 5952.69 2257.35 16893.72 00:46:22.943 0 00:46:22.943 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:46:22.943 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:46:22.943 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:46:22.943 | .driver_specific 00:46:22.943 | .nvme_error 00:46:22.943 | .status_code 00:46:22.943 | .command_transient_transport_error' 00:46:22.943 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 )) 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2959460 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2959460 ']' 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2959460 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2959460 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2959460' 00:46:23.201 killing process with pid 2959460 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2959460 00:46:23.201 Received shutdown signal, test time was about 2.000000 seconds 00:46:23.201 00:46:23.201 Latency(us) 00:46:23.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:23.201 =================================================================================================================== 
00:46:23.201 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:23.201 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2959460 00:46:23.458 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:46:23.458 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2959877 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2959877 /var/tmp/bperf.sock 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2959877 ']' 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:23.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:46:23.459 16:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:23.459 [2024-07-22 16:59:43.026410] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:46:23.459 [2024-07-22 16:59:43.026487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959877 ] 00:46:23.459 I/O size of 131072 is greater than zero copy threshold (65536). 00:46:23.459 Zero copy mechanism will not be used. 
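The pass/fail gate above, (( 168 > 0 )), comes from get_transient_errcount: the harness queries bdev_get_iostat over the bperf RPC socket and pulls the transient-transport-error counter out of the JSON with jq. A minimal sketch of that readback, reconstructed only from the commands visible in the trace (the variable name errcount is illustrative; paths are this job's workspace):

    # Read per-bdev NVMe error statistics from the bdevperf instance
    # (populated because bdev_nvme_set_options --nvme-error-stat was given at setup).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The run only counts as a pass if the injected digest corruptions actually
    # surfaced as TRANSIENT TRANSPORT ERROR completions:
    (( errcount > 0 ))

With the 4096-byte/qd-128 case verified and its bdevperf killed, the harness relaunches bdevperf for the next case, randwrite with 131072-byte I/O at queue depth 16 (-w randwrite -o 131072 -q 16 -z, held idle until perform_tests is sent over /var/tmp/bperf.sock), which is the startup output that follows.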
00:46:23.459 EAL: No free 2048 kB hugepages reported on node 1 00:46:23.459 [2024-07-22 16:59:43.095633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:23.717 [2024-07-22 16:59:43.186123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:23.717 16:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:46:23.717 16:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:46:23.717 16:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:46:23.717 16:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:46:23.974 16:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:46:23.974 16:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:23.974 16:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:23.974 16:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:23.974 16:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:23.974 16:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:46:24.540 nvme0n1 00:46:24.540 16:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:46:24.540 16:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:24.540 16:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:46:24.540 16:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:24.540 16:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:46:24.540 16:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:24.540 I/O size of 131072 is greater than zero copy threshold (65536). 00:46:24.540 Zero copy mechanism will not be used. 00:46:24.540 Running I/O for 2 seconds... 
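Before the 2-second run begins, the trace above wires up the digest-error scenario entirely over RPC: NVMe error statistics and unlimited retries are enabled, CRC-32C error injection is cleared, the controller is attached with TCP data digest enabled (--ddgst), and injection is then armed to corrupt 32 CRC-32C operations. Condensed into plain commands, all taken verbatim from the trace (only the $rpc shorthand is illustrative):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors, retry forever
    $rpc accel_error_inject_error -o crc32c -t disable                   # start from a clean state
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # data digest on the TCP qpair
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt the next 32 CRC-32C ops
    # Drive I/O through the corrupted digests:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

In the records that follow, each 131072-byte write spans 32 four-KiB blocks (hence len:32), and every corrupted digest is caught by data_crc32_calc_done and completed as a TRANSIENT TRANSPORT ERROR.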
00:46:24.540 [2024-07-22 16:59:44.142727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90
00:46:24.540 [2024-07-22 16:59:44.143099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:46:24.540 [2024-07-22 16:59:44.143134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... some seventy further data-digest-error / WRITE / TRANSIENT TRANSPORT ERROR record triplets, 16:59:44.151 through 16:59:44.707, all on tqpair=(0x243ac50) with pdu=0x2000190fef90, qid:1 cid:15, lba varying ...]
00:46:25.318 [2024-07-22 16:59:44.713571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90
00:46:25.318 [2024-07-22 16:59:44.713875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:46:25.318 [2024-07-22 16:59:44.713917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:46:25.318 [2024-07-22 16:59:44.720586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.318 [2024-07-22 16:59:44.720908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.318 [2024-07-22 16:59:44.720941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.318 [2024-07-22 16:59:44.728323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.318 [2024-07-22 16:59:44.728644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.318 [2024-07-22 16:59:44.728677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.318 [2024-07-22 16:59:44.736221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.318 [2024-07-22 16:59:44.736574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.318 [2024-07-22 16:59:44.736606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.318 [2024-07-22 16:59:44.743995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.318 [2024-07-22 16:59:44.744289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.318 [2024-07-22 16:59:44.744321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.318 [2024-07-22 16:59:44.751642] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.318 [2024-07-22 16:59:44.751957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.318 [2024-07-22 16:59:44.752011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.318 [2024-07-22 16:59:44.759387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.318 [2024-07-22 16:59:44.759694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.318 [2024-07-22 16:59:44.759726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.318 [2024-07-22 16:59:44.766535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.318 [2024-07-22 16:59:44.766840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.318 [2024-07-22 16:59:44.766873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.318 [2024-07-22 16:59:44.773422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.318 [2024-07-22 16:59:44.773730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.318 [2024-07-22 16:59:44.773762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.318 [2024-07-22 16:59:44.780143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.318 [2024-07-22 16:59:44.780449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.318 [2024-07-22 16:59:44.780482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.786602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.786860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.786887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.793855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.794158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.794187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.800960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.801303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.801336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.808686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.809077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.809105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.817390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.817729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.817762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.826025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.826320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.826353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.835275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.835601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.835641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.843812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.844172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.844201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.850788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.851102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.851129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.857529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.857833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.857866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.865010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.865284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.865311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.873274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.873698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.873731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.882149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.882537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.882570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.888981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.889267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.889295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.894913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.895187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.895216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.900886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.901189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.901217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.906894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.907228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.907265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.913046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.913325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.913366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.919014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.919302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 
[2024-07-22 16:59:44.919335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.925009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.925262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.925303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.931148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.931461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.931505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.937515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.937807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.937839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.944739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.945061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.945088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.952750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.953056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.953088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.319 [2024-07-22 16:59:44.961319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.319 [2024-07-22 16:59:44.961717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.319 [2024-07-22 16:59:44.961750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:44.969655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:44.970063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:44.970099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:44.978092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:44.978435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:44.978468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:44.986413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:44.986796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:44.986837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:44.994874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:44.995150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:44.995178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.003098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.003429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:45.003463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.011563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.011929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:45.011962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.019543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.019918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:45.019950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.026064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.026343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:45.026376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.032030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.032304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:45.032347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.038301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.038598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:45.038631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.044673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.044976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:45.045009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.050675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.050974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:45.051018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.057123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.057435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:45.057469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.064644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.064945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:45.064986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.071577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.071870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.578 [2024-07-22 16:59:45.071903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.578 [2024-07-22 16:59:45.078614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.578 [2024-07-22 16:59:45.078907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.078939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.085948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.086227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.086255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.093482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.093765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.093800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.100356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.100646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.100673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.107515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.107805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.107832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.114851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.115172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.115199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.122183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 
[2024-07-22 16:59:45.122441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.122468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.129661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.129971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.130019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.136830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.137122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.137149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.144817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.145188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.145221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.153542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.153851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.153883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.162548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.162975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.163008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.171309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.171728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.171761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.179746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.180097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.180124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.188080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.188354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.188387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.196743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.197052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.197078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.205850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.206235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.206269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.214802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.215191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.215220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.579 [2024-07-22 16:59:45.223638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.579 [2024-07-22 16:59:45.224030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.579 [2024-07-22 16:59:45.224060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.232570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.232946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.233012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.241057] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.241351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.241384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.249622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.250044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.250075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.258495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.258924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.258957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.267394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.267734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.267777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.275449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.275832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.275876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.283927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.284338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.284371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.292627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.293012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.293040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:46:25.838 [2024-07-22 16:59:45.300947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.301300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.301333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.309691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.310120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.310157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.318553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.318845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.318889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.327598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.327924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.327977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.336320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.336702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.336735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.345459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.345777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.345809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.354503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.354905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.354937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.363903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.364275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.364317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.372143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.372532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.372573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.381017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.381285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.381312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:25.838 [2024-07-22 16:59:45.389913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.838 [2024-07-22 16:59:45.390189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.838 [2024-07-22 16:59:45.390217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:25.839 [2024-07-22 16:59:45.398784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.839 [2024-07-22 16:59:45.399147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.839 [2024-07-22 16:59:45.399175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:25.839 [2024-07-22 16:59:45.407022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.839 [2024-07-22 16:59:45.407298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.839 [2024-07-22 16:59:45.407330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:25.839 [2024-07-22 16:59:45.416223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90 00:46:25.839 [2024-07-22 16:59:45.416570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:25.839 [2024-07-22 16:59:45.416603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:46:25.839 [2024-07-22 16:59:45.423861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x243ac50) with pdu=0x2000190fef90
00:46:25.839 [2024-07-22 16:59:45.424160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:46:25.839 [2024-07-22 16:59:45.424188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... 16:59:45.431130 through 16:59:46.130492: the same three-line pattern repeats for every in-flight WRITE on tqpair 0x243ac50 (a tcp.c:2058 data digest error, an nvme_qpair.c:243 WRITE print with sqid:1 cid:15 nsid:1 len:32 and a varying lba, and an nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061); identical entries condensed for readability ...]
00:46:26.619
00:46:26.619 Latency(us)
00:46:26.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:46:26.619 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:46:26.619 nvme0n1 : 2.00 4285.82 535.73 0.00 0.00 3724.88 2742.80 11213.94
00:46:26.619 ===================================================================================================================
00:46:26.619 Total : 4285.82 535.73 0.00 0.00 3724.88 2742.80 11213.94
00:46:26.619 0
00:46:26.619 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:46:26.619 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:46:26.619 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:46:26.619 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:46:26.619 | .driver_specific
00:46:26.619 | .nvme_error
00:46:26.619 | .status_code
00:46:26.619 | .command_transient_transport_error'
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 276 > 0 ))
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2959877
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2959877 ']'
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2959877
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2959877
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2959877'
00:46:26.877 killing process with pid 2959877
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2959877
00:46:26.877 Received shutdown signal, test time was about 2.000000 seconds
00:46:26.877
00:46:26.877 Latency(us)
00:46:26.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:46:26.877 ===================================================================================================================
00:46:26.877 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2959877
00:46:26.877 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2958384
00:46:27.134 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2958384 ']'
00:46:27.134 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2958384
00:46:27.134 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:46:27.134 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:46:27.134 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2958384
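The pass/fail decision for this test case is visible in the trace above: host/digest.sh reads the bdev's NVMe error counters over the bperf RPC socket and asserts that the transient-transport-error count is non-zero (276 in this run). A minimal shell sketch of that check, assembled from the commands shown above (the function body is a paraphrase, not a copy of host/digest.sh):

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat returns JSON; the digest test only cares about one counter
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The assertion replayed at host/digest.sh@71 above:
    (( $(get_transient_errcount nvme0n1) > 0 ))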
00:46:27.134 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:46:27.134 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:46:27.134 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2958384'
00:46:27.134 killing process with pid 2958384
00:46:27.134 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2958384
00:46:27.134 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2958384
00:46:27.392
00:46:27.392 real 0m15.913s
00:46:27.392 user 0m31.384s
00:46:27.392 sys 0m4.735s
00:46:27.392 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable
00:46:27.392 16:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:46:27.392 ************************************
00:46:27.392 END TEST nvmf_digest_error
00:46:27.392 ************************************
00:46:27.392 16:59:46 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:46:27.392 16:59:46 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:46:27.392 16:59:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:46:27.392 16:59:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:46:27.392 16:59:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:46:27.392 16:59:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:46:27.392 16:59:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:46:27.392 16:59:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:46:27.392 rmmod nvme_tcp
00:46:27.392 rmmod nvme_fabrics
00:46:27.392 rmmod nvme_keyring
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2958384 ']'
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2958384
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 2958384 ']'
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 2958384
00:46:27.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2958384) - No such process
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 2958384 is not found'
00:46:27.392 Process with pid 2958384 is not found
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:46:27.392 16:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
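The nvmfcleanup trace above is the kernel-initiator teardown path: sync, then up to 20 attempts to unload nvme-tcp and nvme-fabrics (the rmmod lines are modprobe -v output; nvme_keyring apparently goes with them as a dependency). A rough sketch of the loop as it replays here; the transport variable name and the break-on-success control flow are assumptions, since xtrace only shows the commands that actually ran:

    nvmfcleanup() {
        sync
        if [ "$TEST_TRANSPORT" == tcp ]; then    # this run compares 'tcp == tcp'
            set +e                               # removal may fail while module references drain
            for i in {1..20}; do
                # -v prints the underlying rmmod commands seen above
                modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break  # assumed exit condition
            done
            set -e
            return 0
        fi
    }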
00:46:29.920 16:59:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:46:29.920
00:46:29.920 real 0m36.821s
00:46:29.920 user 1m2.931s
00:46:29.920 sys 0m11.547s
00:46:29.920 16:59:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable
00:46:29.920 16:59:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:46:29.920 ************************************
00:46:29.920 END TEST nvmf_digest
00:46:29.920 ************************************
00:46:29.920 16:59:49 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:46:29.920 16:59:49 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:46:29.920 16:59:49 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:46:29.920 16:59:49 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:46:29.920 16:59:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:46:29.920 16:59:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:46:29.920 16:59:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:46:29.920 ************************************
00:46:29.920 START TEST nvmf_bdevperf
00:46:29.920 ************************************
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:46:29.920 * Looking for test storage...
00:46:29.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
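Pulled together, the nvmf/common.sh@7-@22 lines above leave bdevperf.sh with roughly this environment (values as seen in this run; the hostnqn/hostid pair is generated per host by nvme gen-hostnqn, everything else is the suite default, and the hostid derivation below is a guess at what the script does):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:8b464f06-... on this node
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # one way to strip the nqn prefix; the real script may differ
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NET_TYPE=phy
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

    # A later kernel-initiator step would expand these along the lines of:
    #   $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -a <target addr> -s $NVMF_PORT -n $NVME_SUBNQN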
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:29.920 16:59:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:46:29.921 16:59:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
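The e810/x722/mlx arrays being filled here, and continuing just below, are lookup tables of NIC PCI vendor/device IDs that the harness supports; gather_supported_nvmf_pci_devs then walks the PCI bus and keeps only ports whose ID pair appears in a table. A minimal sketch of that matching against the standard sysfs layout (illustrative code, not an extract from nvmf/common.sh; only the 0x8086/0x159b E810 pair actually matched later in this log is checked):

# report E810 ports (vendor 0x8086, device 0x159b) and their netdev names
for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor")    # e.g. 0x8086
  device=$(<"$pci/device")    # e.g. 0x159b
  if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
    # a bound kernel driver (ice here) exposes the netdev name under net/
    echo "Found ${pci##*/} ($vendor - $device): $(ls "$pci/net" 2>/dev/null)"
  fi
done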
00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:46:32.448 Found 0000:82:00.0 (0x8086 - 0x159b) 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:32.448 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:46:32.449 Found 0000:82:00.1 (0x8086 - 0x159b) 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:46:32.449 Found net devices under 0000:82:00.0: cvl_0_0 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:46:32.449 Found net devices under 0000:82:00.1: cvl_0_1 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 
00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:46:32.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:32.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:46:32.449 00:46:32.449 --- 10.0.0.2 ping statistics --- 00:46:32.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:32.449 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:32.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:32.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:46:32.449 00:46:32.449 --- 10.0.0.1 ping statistics --- 00:46:32.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:32.449 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2962637 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2962637 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 2962637 ']' 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:32.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
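Stripped of the xtrace noise, the nvmf_tcp_init sequence traced above amounts to the following: the target-side E810 port is isolated in its own network namespace, both sides get addresses on 10.0.0.0/24, port 4420 is opened for NVMe/TCP, and reachability is verified in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch, not a verbatim extract (the nvmf_tgt path is shortened to its repo-relative form):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# the target itself then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE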
00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:46:32.449 16:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:46:32.449 [2024-07-22 16:59:51.820291] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:46:32.449 [2024-07-22 16:59:51.820370] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:32.449 EAL: No free 2048 kB hugepages reported on node 1 00:46:32.449 [2024-07-22 16:59:51.905126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:32.449 [2024-07-22 16:59:51.996619] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:32.449 [2024-07-22 16:59:51.996681] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:32.449 [2024-07-22 16:59:51.996698] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:32.449 [2024-07-22 16:59:51.996711] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:32.449 [2024-07-22 16:59:51.996723] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:32.449 [2024-07-22 16:59:51.996823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:46:32.449 [2024-07-22 16:59:51.996916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:46:32.449 [2024-07-22 16:59:51.996919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:46:33.382 [2024-07-22 16:59:52.776772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:46:33.382 Malloc0 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:33.382 16:59:52 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:46:33.382 [2024-07-22 16:59:52.837614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:46:33.382 { 00:46:33.382 "params": { 00:46:33.382 "name": "Nvme$subsystem", 00:46:33.382 "trtype": "$TEST_TRANSPORT", 00:46:33.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:33.382 "adrfam": "ipv4", 00:46:33.382 "trsvcid": "$NVMF_PORT", 00:46:33.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:33.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:33.382 "hdgst": ${hdgst:-false}, 00:46:33.382 "ddgst": ${ddgst:-false} 00:46:33.382 }, 00:46:33.382 "method": "bdev_nvme_attach_controller" 00:46:33.382 } 00:46:33.382 EOF 00:46:33.382 )") 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:46:33.382 16:59:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:46:33.382 "params": { 00:46:33.382 "name": "Nvme1", 00:46:33.382 "trtype": "tcp", 00:46:33.382 "traddr": "10.0.0.2", 00:46:33.382 "adrfam": "ipv4", 00:46:33.382 "trsvcid": "4420", 00:46:33.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:33.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:33.382 "hdgst": false, 00:46:33.382 "ddgst": false 00:46:33.382 }, 00:46:33.382 "method": "bdev_nvme_attach_controller" 00:46:33.382 }' 00:46:33.382 [2024-07-22 16:59:52.888004] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
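Each rpc_cmd call above is a thin wrapper that forwards its arguments to scripts/rpc.py on the target's UNIX domain socket, the same /var/tmp/spdk.sock that waitforlisten polled for. Run by hand, the provisioning sequence for this test reduces to the following sketch (the socket does not need the netns prefix, since UNIX sockets are filesystem objects, not network-namespaced):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport; -u sets the I/O unit size
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420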
00:46:33.382 [2024-07-22 16:59:52.888085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962745 ] 00:46:33.382 EAL: No free 2048 kB hugepages reported on node 1 00:46:33.382 [2024-07-22 16:59:52.957447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:33.639 [2024-07-22 16:59:53.047892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:33.639 Running I/O for 1 seconds... 00:46:35.011 00:46:35.012 Latency(us) 00:46:35.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:35.012 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:46:35.012 Verification LBA range: start 0x0 length 0x4000 00:46:35.012 Nvme1n1 : 1.01 8942.20 34.93 0.00 0.00 14259.61 3058.35 19223.89 00:46:35.012 =================================================================================================================== 00:46:35.012 Total : 8942.20 34.93 0.00 0.00 14259.61 3058.35 19223.89 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2962930 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:46:35.012 { 00:46:35.012 "params": { 00:46:35.012 "name": "Nvme$subsystem", 00:46:35.012 "trtype": "$TEST_TRANSPORT", 00:46:35.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:35.012 "adrfam": "ipv4", 00:46:35.012 "trsvcid": "$NVMF_PORT", 00:46:35.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:35.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:35.012 "hdgst": ${hdgst:-false}, 00:46:35.012 "ddgst": ${ddgst:-false} 00:46:35.012 }, 00:46:35.012 "method": "bdev_nvme_attach_controller" 00:46:35.012 } 00:46:35.012 EOF 00:46:35.012 )") 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:46:35.012 16:59:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:46:35.012 "params": { 00:46:35.012 "name": "Nvme1", 00:46:35.012 "trtype": "tcp", 00:46:35.012 "traddr": "10.0.0.2", 00:46:35.012 "adrfam": "ipv4", 00:46:35.012 "trsvcid": "4420", 00:46:35.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:35.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:35.012 "hdgst": false, 00:46:35.012 "ddgst": false 00:46:35.012 }, 00:46:35.012 "method": "bdev_nvme_attach_controller" 00:46:35.012 }' 00:46:35.012 [2024-07-22 16:59:54.492944] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
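The printf above emits only the per-controller fragment; gen_nvmf_target_json splices it through jq into SPDK's standard subsystems envelope before handing it to bdevperf on a file descriptor (/dev/fd/63 here). Written out to a regular file, the equivalent stand-alone invocation of this second run would look like the sketch below; the envelope is reconstructed from common.sh's usual output shape rather than copied from this log, and /tmp/bdevperf.json is an illustrative path:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds,
# -f keep running when I/O fails (the harness is about to kill the target mid-run)
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f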
00:46:35.012 [2024-07-22 16:59:54.493056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962930 ] 00:46:35.012 EAL: No free 2048 kB hugepages reported on node 1 00:46:35.012 [2024-07-22 16:59:54.562694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:35.012 [2024-07-22 16:59:54.646629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:35.577 Running I/O for 15 seconds... 16:59:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2962637 16:59:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
[log condensed: several hundred repeated nvme_qpair.c *NOTICE* pairs omitted. With the target process killed, each in-flight READ/WRITE on qid:1 (lba 48024-49040, len:8) is printed by nvme_io_qpair_print_command and completed as 'ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0'; the capture ends mid-stream in this run of identical lines]
sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 
[2024-07-22 16:59:57.469748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.109 [2024-07-22 16:59:57.469855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.109 [2024-07-22 16:59:57.469872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.469887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.469907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.469924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.469940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.469956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.469981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.469998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.470051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.470080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.470107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.470144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.470175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.470203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.470231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.470274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.470301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:38.110 [2024-07-22 16:59:57.470352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179e150 is same with the state(5) to be set 00:46:38.110 [2024-07-22 16:59:57.470387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:46:38.110 [2024-07-22 16:59:57.470400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:46:38.110 [2024-07-22 16:59:57.470414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48944 len:8 PRP1 0x0 PRP2 0x0 00:46:38.110 [2024-07-22 16:59:57.470428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470499] 
bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x179e150 was disconnected and freed. reset controller. 00:46:38.110 [2024-07-22 16:59:57.470579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:46:38.110 [2024-07-22 16:59:57.470604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:46:38.110 [2024-07-22 16:59:57.470637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:46:38.110 [2024-07-22 16:59:57.470667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:46:38.110 [2024-07-22 16:59:57.470696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:38.110 [2024-07-22 16:59:57.470711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.110 [2024-07-22 16:59:57.474559] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.110 [2024-07-22 16:59:57.474602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.110 [2024-07-22 16:59:57.475380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.110 [2024-07-22 16:59:57.475417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.110 [2024-07-22 16:59:57.475435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.110 [2024-07-22 16:59:57.475676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.110 [2024-07-22 16:59:57.475923] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.110 [2024-07-22 16:59:57.475948] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.110 [2024-07-22 16:59:57.475978] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.110 [2024-07-22 16:59:57.479607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
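The sequence above is SPDK's qpair-teardown and controller-reset path: the TCP connection to the target has dropped, so every queued READ/WRITE on qid:1 is completed manually as ABORTED - SQ DELETION (00/08), I/O qpair 0x179e150 is freed by bdev_nvme_disconnected_qpair_cb, the outstanding ASYNC EVENT REQUESTs on the admin queue are aborted, and bdev_nvme then tries to reconnect controller nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. The connect() is refused (errno 111), so spdk_nvme_ctrlr_reconnect_poll_async reports that controller reinitialization failed and the reset attempt is abandoned. A minimal sketch of the public disconnect/reconnect-poll pattern behind these messages, assuming a ctrlr handle obtained earlier from spdk_nvme_connect(); the busy-wait loop and error handling are illustrative, not the test's actual code:

#include <errno.h>
#include "spdk/nvme.h"

/* Illustrative sketch of the reset path whose log lines appear above. */
static int
reset_ctrlr_sketch(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc;

	/* Tear down all qpairs; queued I/O is aborted with SQ DELETION. */
	rc = spdk_nvme_ctrlr_disconnect(ctrlr);
	if (rc != 0) {
		return rc;
	}

	/* Start the asynchronous reconnect... */
	spdk_nvme_ctrlr_reconnect_async(ctrlr);

	/* ...and poll it to completion. With nothing listening on the
	 * target port this returns a negative errno and the controller
	 * is left in the failed state logged above. */
	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);

	return rc;
}

bdev_nvme drives the same state machine from a poller rather than busy-waiting, which is consistent with the few-millisecond spacing of the failed cycles that follow.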
00:46:38.110 [2024-07-22 16:59:57.488735] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.110 [2024-07-22 16:59:57.489237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.110 [2024-07-22 16:59:57.489282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.110 [2024-07-22 16:59:57.489298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.110 [2024-07-22 16:59:57.489547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.110 [2024-07-22 16:59:57.489791] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.110 [2024-07-22 16:59:57.489815] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.110 [2024-07-22 16:59:57.489831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.110 [2024-07-22 16:59:57.493415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.110 [2024-07-22 16:59:57.502719] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.110 [2024-07-22 16:59:57.503210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.110 [2024-07-22 16:59:57.503243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.110 [2024-07-22 16:59:57.503261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.110 [2024-07-22 16:59:57.503500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.110 [2024-07-22 16:59:57.503743] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.110 [2024-07-22 16:59:57.503767] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.110 [2024-07-22 16:59:57.503783] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.110 [2024-07-22 16:59:57.507368] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.110 [2024-07-22 16:59:57.516656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.110 [2024-07-22 16:59:57.517155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.110 [2024-07-22 16:59:57.517195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.110 [2024-07-22 16:59:57.517213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.110 [2024-07-22 16:59:57.517453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.110 [2024-07-22 16:59:57.517697] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.110 [2024-07-22 16:59:57.517720] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.110 [2024-07-22 16:59:57.517735] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.110 [2024-07-22 16:59:57.521320] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.110 [2024-07-22 16:59:57.530612] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.110 [2024-07-22 16:59:57.531093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.110 [2024-07-22 16:59:57.531125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.110 [2024-07-22 16:59:57.531143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.110 [2024-07-22 16:59:57.531383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.111 [2024-07-22 16:59:57.531632] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.111 [2024-07-22 16:59:57.531656] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.111 [2024-07-22 16:59:57.531671] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.111 [2024-07-22 16:59:57.535254] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.111 [2024-07-22 16:59:57.544556] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.111 [2024-07-22 16:59:57.545097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.111 [2024-07-22 16:59:57.545160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.111 [2024-07-22 16:59:57.545177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.111 [2024-07-22 16:59:57.545416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.111 [2024-07-22 16:59:57.545660] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.111 [2024-07-22 16:59:57.545683] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.111 [2024-07-22 16:59:57.545698] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.111 [2024-07-22 16:59:57.549284] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.111 [2024-07-22 16:59:57.558581] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.111 [2024-07-22 16:59:57.559103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.111 [2024-07-22 16:59:57.559153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.111 [2024-07-22 16:59:57.559171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.111 [2024-07-22 16:59:57.559409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.111 [2024-07-22 16:59:57.559652] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.111 [2024-07-22 16:59:57.559676] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.111 [2024-07-22 16:59:57.559691] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.111 [2024-07-22 16:59:57.563277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.111 [2024-07-22 16:59:57.572571] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.111 [2024-07-22 16:59:57.573037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.111 [2024-07-22 16:59:57.573096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.111 [2024-07-22 16:59:57.573114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.111 [2024-07-22 16:59:57.573353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.111 [2024-07-22 16:59:57.573595] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.111 [2024-07-22 16:59:57.573619] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.111 [2024-07-22 16:59:57.573634] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.111 [2024-07-22 16:59:57.577222] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.111 [2024-07-22 16:59:57.586524] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.111 [2024-07-22 16:59:57.587039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.111 [2024-07-22 16:59:57.587071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.111 [2024-07-22 16:59:57.587089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.111 [2024-07-22 16:59:57.587328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.111 [2024-07-22 16:59:57.587572] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.111 [2024-07-22 16:59:57.587595] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.111 [2024-07-22 16:59:57.587611] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.111 [2024-07-22 16:59:57.591238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.111 [2024-07-22 16:59:57.600534] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.111 [2024-07-22 16:59:57.601067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.111 [2024-07-22 16:59:57.601099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.111 [2024-07-22 16:59:57.601116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.111 [2024-07-22 16:59:57.601355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.111 [2024-07-22 16:59:57.601599] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.111 [2024-07-22 16:59:57.601622] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.111 [2024-07-22 16:59:57.601637] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.111 [2024-07-22 16:59:57.605228] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.111 [2024-07-22 16:59:57.614516] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.111 [2024-07-22 16:59:57.615000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.111 [2024-07-22 16:59:57.615051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.111 [2024-07-22 16:59:57.615069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.111 [2024-07-22 16:59:57.615308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.111 [2024-07-22 16:59:57.615551] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.111 [2024-07-22 16:59:57.615575] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.111 [2024-07-22 16:59:57.615591] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.111 [2024-07-22 16:59:57.619177] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.111 [2024-07-22 16:59:57.628514] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.111 [2024-07-22 16:59:57.629023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.111 [2024-07-22 16:59:57.629057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.111 [2024-07-22 16:59:57.629080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.111 [2024-07-22 16:59:57.629320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.111 [2024-07-22 16:59:57.629563] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.111 [2024-07-22 16:59:57.629587] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.111 [2024-07-22 16:59:57.629603] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.111 [2024-07-22 16:59:57.633184] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.111 [2024-07-22 16:59:57.642481] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.111 [2024-07-22 16:59:57.643014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.112 [2024-07-22 16:59:57.643045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.112 [2024-07-22 16:59:57.643063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.112 [2024-07-22 16:59:57.643302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.112 [2024-07-22 16:59:57.643545] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.112 [2024-07-22 16:59:57.643569] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.112 [2024-07-22 16:59:57.643584] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.112 [2024-07-22 16:59:57.647169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.112 [2024-07-22 16:59:57.656464] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.112 [2024-07-22 16:59:57.656979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.112 [2024-07-22 16:59:57.657011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.112 [2024-07-22 16:59:57.657028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.112 [2024-07-22 16:59:57.657267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.112 [2024-07-22 16:59:57.657510] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.112 [2024-07-22 16:59:57.657533] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.112 [2024-07-22 16:59:57.657548] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.112 [2024-07-22 16:59:57.661130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.112 [2024-07-22 16:59:57.670413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.112 [2024-07-22 16:59:57.671018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.112 [2024-07-22 16:59:57.671064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.112 [2024-07-22 16:59:57.671084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.112 [2024-07-22 16:59:57.671330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.112 [2024-07-22 16:59:57.671574] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.112 [2024-07-22 16:59:57.671604] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.112 [2024-07-22 16:59:57.671621] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.112 [2024-07-22 16:59:57.675210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.112 [2024-07-22 16:59:57.684303] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.112 [2024-07-22 16:59:57.684913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.112 [2024-07-22 16:59:57.684957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.112 [2024-07-22 16:59:57.684992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.112 [2024-07-22 16:59:57.685239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.112 [2024-07-22 16:59:57.685483] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.112 [2024-07-22 16:59:57.685507] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.112 [2024-07-22 16:59:57.685523] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.112 [2024-07-22 16:59:57.689107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.112 [2024-07-22 16:59:57.698185] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.112 [2024-07-22 16:59:57.698692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.112 [2024-07-22 16:59:57.698744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.112 [2024-07-22 16:59:57.698762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.112 [2024-07-22 16:59:57.699016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.112 [2024-07-22 16:59:57.699260] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.112 [2024-07-22 16:59:57.699284] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.112 [2024-07-22 16:59:57.699300] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.112 [2024-07-22 16:59:57.702874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.112 [2024-07-22 16:59:57.712171] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.112 [2024-07-22 16:59:57.712657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.112 [2024-07-22 16:59:57.712689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.112 [2024-07-22 16:59:57.712707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.112 [2024-07-22 16:59:57.712946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.112 [2024-07-22 16:59:57.713201] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.112 [2024-07-22 16:59:57.713226] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.112 [2024-07-22 16:59:57.713241] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.112 [2024-07-22 16:59:57.716810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.112 [2024-07-22 16:59:57.726240] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.112 [2024-07-22 16:59:57.726779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.112 [2024-07-22 16:59:57.726812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.112 [2024-07-22 16:59:57.726830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.112 [2024-07-22 16:59:57.727082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.112 [2024-07-22 16:59:57.727326] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.112 [2024-07-22 16:59:57.727350] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.112 [2024-07-22 16:59:57.727365] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.112 [2024-07-22 16:59:57.731014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.112 [2024-07-22 16:59:57.740244] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.112 [2024-07-22 16:59:57.740744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.112 [2024-07-22 16:59:57.740776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.112 [2024-07-22 16:59:57.740799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.112 [2024-07-22 16:59:57.741058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.112 [2024-07-22 16:59:57.741308] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.112 [2024-07-22 16:59:57.741333] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.112 [2024-07-22 16:59:57.741348] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.112 [2024-07-22 16:59:57.744983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.371 [2024-07-22 16:59:57.754221] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.371 [2024-07-22 16:59:57.754767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.371 [2024-07-22 16:59:57.754799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.371 [2024-07-22 16:59:57.754817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.371 [2024-07-22 16:59:57.755069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.371 [2024-07-22 16:59:57.755314] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.371 [2024-07-22 16:59:57.755338] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.371 [2024-07-22 16:59:57.755354] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.371 [2024-07-22 16:59:57.758930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.371 [2024-07-22 16:59:57.768249] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.371 [2024-07-22 16:59:57.768764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.371 [2024-07-22 16:59:57.768816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.371 [2024-07-22 16:59:57.768834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.371 [2024-07-22 16:59:57.769090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.371 [2024-07-22 16:59:57.769334] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.371 [2024-07-22 16:59:57.769358] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.371 [2024-07-22 16:59:57.769374] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.371 [2024-07-22 16:59:57.772949] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.371 [2024-07-22 16:59:57.782239] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.371 [2024-07-22 16:59:57.782733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.371 [2024-07-22 16:59:57.782765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.371 [2024-07-22 16:59:57.782782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.371 [2024-07-22 16:59:57.783034] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.371 [2024-07-22 16:59:57.783278] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.371 [2024-07-22 16:59:57.783302] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.371 [2024-07-22 16:59:57.783318] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.371 [2024-07-22 16:59:57.786887] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.371 [2024-07-22 16:59:57.796173] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.371 [2024-07-22 16:59:57.796675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.371 [2024-07-22 16:59:57.796707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.371 [2024-07-22 16:59:57.796724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.371 [2024-07-22 16:59:57.796975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.371 [2024-07-22 16:59:57.797220] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.371 [2024-07-22 16:59:57.797244] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.371 [2024-07-22 16:59:57.797259] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.371 [2024-07-22 16:59:57.800832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.371 [2024-07-22 16:59:57.810125] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.371 [2024-07-22 16:59:57.810720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.371 [2024-07-22 16:59:57.810764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.371 [2024-07-22 16:59:57.810784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.371 [2024-07-22 16:59:57.811045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.371 [2024-07-22 16:59:57.811291] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.371 [2024-07-22 16:59:57.811315] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.371 [2024-07-22 16:59:57.811338] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.372 [2024-07-22 16:59:57.814915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.372 [2024-07-22 16:59:57.824001] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.372 [2024-07-22 16:59:57.824540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.372 [2024-07-22 16:59:57.824574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.372 [2024-07-22 16:59:57.824592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.372 [2024-07-22 16:59:57.824832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.372 [2024-07-22 16:59:57.825091] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.372 [2024-07-22 16:59:57.825116] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.372 [2024-07-22 16:59:57.825132] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.372 [2024-07-22 16:59:57.828703] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.372 [2024-07-22 16:59:57.838041] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.372 [2024-07-22 16:59:57.838563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.372 [2024-07-22 16:59:57.838597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.372 [2024-07-22 16:59:57.838615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.372 [2024-07-22 16:59:57.838854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.372 [2024-07-22 16:59:57.839110] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.372 [2024-07-22 16:59:57.839135] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.372 [2024-07-22 16:59:57.839151] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.372 [2024-07-22 16:59:57.842725] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.372 [2024-07-22 16:59:57.852026] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.372 [2024-07-22 16:59:57.852473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.372 [2024-07-22 16:59:57.852505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.372 [2024-07-22 16:59:57.852522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.372 [2024-07-22 16:59:57.852761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.372 [2024-07-22 16:59:57.853019] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.372 [2024-07-22 16:59:57.853043] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.372 [2024-07-22 16:59:57.853059] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.372 [2024-07-22 16:59:57.856630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.372 [2024-07-22 16:59:57.865921] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.372 [2024-07-22 16:59:57.866460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.372 [2024-07-22 16:59:57.866493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.372 [2024-07-22 16:59:57.866511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.372 [2024-07-22 16:59:57.866750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.372 [2024-07-22 16:59:57.867007] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.372 [2024-07-22 16:59:57.867032] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.372 [2024-07-22 16:59:57.867047] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.372 [2024-07-22 16:59:57.870620] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.372 [2024-07-22 16:59:57.879913] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.372 [2024-07-22 16:59:57.880432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.372 [2024-07-22 16:59:57.880464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.372 [2024-07-22 16:59:57.880481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.372 [2024-07-22 16:59:57.880720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.372 [2024-07-22 16:59:57.880975] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.372 [2024-07-22 16:59:57.880999] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.372 [2024-07-22 16:59:57.881014] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.372 [2024-07-22 16:59:57.884584] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.372 [2024-07-22 16:59:57.893871] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.372 [2024-07-22 16:59:57.894423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.372 [2024-07-22 16:59:57.894473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.372 [2024-07-22 16:59:57.894490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.372 [2024-07-22 16:59:57.894729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.372 [2024-07-22 16:59:57.894984] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.372 [2024-07-22 16:59:57.895009] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.372 [2024-07-22 16:59:57.895024] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.372 [2024-07-22 16:59:57.898595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.372 [2024-07-22 16:59:57.907885] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.372 [2024-07-22 16:59:57.908377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.372 [2024-07-22 16:59:57.908408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.372 [2024-07-22 16:59:57.908426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.372 [2024-07-22 16:59:57.908664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.372 [2024-07-22 16:59:57.908913] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.372 [2024-07-22 16:59:57.908937] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.372 [2024-07-22 16:59:57.908953] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.372 [2024-07-22 16:59:57.912553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.372 [2024-07-22 16:59:57.921840] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.372 [2024-07-22 16:59:57.922346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.372 [2024-07-22 16:59:57.922378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.372 [2024-07-22 16:59:57.922396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.372 [2024-07-22 16:59:57.922635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.372 [2024-07-22 16:59:57.922878] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.372 [2024-07-22 16:59:57.922901] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.372 [2024-07-22 16:59:57.922917] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.372 [2024-07-22 16:59:57.926520] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.372 [2024-07-22 16:59:57.935823] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.372 [2024-07-22 16:59:57.936216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.372 [2024-07-22 16:59:57.936248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.372 [2024-07-22 16:59:57.936266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.372 [2024-07-22 16:59:57.936505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.372 [2024-07-22 16:59:57.936748] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.372 [2024-07-22 16:59:57.936772] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.372 [2024-07-22 16:59:57.936788] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.372 [2024-07-22 16:59:57.940364] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.372 [2024-07-22 16:59:57.949861] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.372 [2024-07-22 16:59:57.950282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.372 [2024-07-22 16:59:57.950314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.372 [2024-07-22 16:59:57.950332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.372 [2024-07-22 16:59:57.950571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.372 [2024-07-22 16:59:57.950814] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.372 [2024-07-22 16:59:57.950838] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.372 [2024-07-22 16:59:57.950854] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.373 [2024-07-22 16:59:57.954439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.373 [2024-07-22 16:59:57.963724] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.373 [2024-07-22 16:59:57.964121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.373 [2024-07-22 16:59:57.964153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.373 [2024-07-22 16:59:57.964171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.373 [2024-07-22 16:59:57.964411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.373 [2024-07-22 16:59:57.964653] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.373 [2024-07-22 16:59:57.964677] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.373 [2024-07-22 16:59:57.964693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.373 [2024-07-22 16:59:57.968275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.373 [2024-07-22 16:59:57.977727] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.373 [2024-07-22 16:59:57.978213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.373 [2024-07-22 16:59:57.978246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.373 [2024-07-22 16:59:57.978263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.373 [2024-07-22 16:59:57.978503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.373 [2024-07-22 16:59:57.978746] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.373 [2024-07-22 16:59:57.978769] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.373 [2024-07-22 16:59:57.978785] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.373 [2024-07-22 16:59:57.982366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.373 [2024-07-22 16:59:57.991655] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.373 [2024-07-22 16:59:57.992158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.373 [2024-07-22 16:59:57.992190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.373 [2024-07-22 16:59:57.992208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.373 [2024-07-22 16:59:57.992447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.373 [2024-07-22 16:59:57.992690] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.373 [2024-07-22 16:59:57.992714] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.373 [2024-07-22 16:59:57.992730] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.373 [2024-07-22 16:59:57.996330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.373 [2024-07-22 16:59:58.005626] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.373 [2024-07-22 16:59:58.006099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.373 [2024-07-22 16:59:58.006132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.373 [2024-07-22 16:59:58.006156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.373 [2024-07-22 16:59:58.006397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.373 [2024-07-22 16:59:58.006639] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.373 [2024-07-22 16:59:58.006663] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.373 [2024-07-22 16:59:58.006678] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.373 [2024-07-22 16:59:58.010264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.373 [2024-07-22 16:59:58.019608] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.632 [2024-07-22 16:59:58.020001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.632 [2024-07-22 16:59:58.020038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.632 [2024-07-22 16:59:58.020058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.632 [2024-07-22 16:59:58.020313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.632 [2024-07-22 16:59:58.020557] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.632 [2024-07-22 16:59:58.020580] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.632 [2024-07-22 16:59:58.020596] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.632 [2024-07-22 16:59:58.024183] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.632 [2024-07-22 16:59:58.033491] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.632 [2024-07-22 16:59:58.033906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.632 [2024-07-22 16:59:58.033938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.632 [2024-07-22 16:59:58.033955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.632 [2024-07-22 16:59:58.034203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.632 [2024-07-22 16:59:58.034448] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.632 [2024-07-22 16:59:58.034472] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.632 [2024-07-22 16:59:58.034487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.632 [2024-07-22 16:59:58.038068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.632 [2024-07-22 16:59:58.047399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.632 [2024-07-22 16:59:58.047910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.632 [2024-07-22 16:59:58.047942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.632 [2024-07-22 16:59:58.047959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.632 [2024-07-22 16:59:58.048208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.632 [2024-07-22 16:59:58.048457] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.632 [2024-07-22 16:59:58.048482] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.632 [2024-07-22 16:59:58.048498] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.632 [2024-07-22 16:59:58.052075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.632 [2024-07-22 16:59:58.061364] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.632 [2024-07-22 16:59:58.061912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.632 [2024-07-22 16:59:58.061956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.632 [2024-07-22 16:59:58.061990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.632 [2024-07-22 16:59:58.062237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.632 [2024-07-22 16:59:58.062482] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.632 [2024-07-22 16:59:58.062506] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.632 [2024-07-22 16:59:58.062523] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.632 [2024-07-22 16:59:58.066103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.632 [2024-07-22 16:59:58.075403] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.632 [2024-07-22 16:59:58.075940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.632 [2024-07-22 16:59:58.075983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.632 [2024-07-22 16:59:58.076003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.632 [2024-07-22 16:59:58.076243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.632 [2024-07-22 16:59:58.076487] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.632 [2024-07-22 16:59:58.076511] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.632 [2024-07-22 16:59:58.076527] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.632 [2024-07-22 16:59:58.080104] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.632 [2024-07-22 16:59:58.089403] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.632 [2024-07-22 16:59:58.089933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.632 [2024-07-22 16:59:58.089991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.632 [2024-07-22 16:59:58.090011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.632 [2024-07-22 16:59:58.090250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.632 [2024-07-22 16:59:58.090494] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.632 [2024-07-22 16:59:58.090517] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.632 [2024-07-22 16:59:58.090533] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.632 [2024-07-22 16:59:58.094115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.632 [2024-07-22 16:59:58.103412] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.632 [2024-07-22 16:59:58.103926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.632 [2024-07-22 16:59:58.103987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.632 [2024-07-22 16:59:58.104007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.632 [2024-07-22 16:59:58.104247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.632 [2024-07-22 16:59:58.104490] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.632 [2024-07-22 16:59:58.104513] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.633 [2024-07-22 16:59:58.104529] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.633 [2024-07-22 16:59:58.108111] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.633 [2024-07-22 16:59:58.117413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.633 [2024-07-22 16:59:58.117908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.633 [2024-07-22 16:59:58.117939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.633 [2024-07-22 16:59:58.117957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.633 [2024-07-22 16:59:58.118208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.633 [2024-07-22 16:59:58.118452] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.633 [2024-07-22 16:59:58.118476] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.633 [2024-07-22 16:59:58.118492] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.633 [2024-07-22 16:59:58.122075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.633 [2024-07-22 16:59:58.131369] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.633 [2024-07-22 16:59:58.131893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.633 [2024-07-22 16:59:58.131925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.633 [2024-07-22 16:59:58.131943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.633 [2024-07-22 16:59:58.132194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.633 [2024-07-22 16:59:58.132438] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.633 [2024-07-22 16:59:58.132462] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.633 [2024-07-22 16:59:58.132478] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.633 [2024-07-22 16:59:58.136059] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.633 [2024-07-22 16:59:58.145366] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.633 [2024-07-22 16:59:58.145856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.633 [2024-07-22 16:59:58.145907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.633 [2024-07-22 16:59:58.145938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.633 [2024-07-22 16:59:58.146191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.633 [2024-07-22 16:59:58.146435] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.633 [2024-07-22 16:59:58.146459] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.633 [2024-07-22 16:59:58.146475] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.633 [2024-07-22 16:59:58.150055] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
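The timestamps show a steady cadence: each "Resetting controller failed." is followed roughly 14 ms later by the next "resetting controller" notice, i.e. the bdev_nvme layer keeps scheduling fresh reset attempts on a short fixed interval rather than giving up. A sketch of that shape of loop is below; try_connect() is a hypothetical stand-in for the transport connect step, and this is an illustration of the pacing visible in the log, not SPDK's actual reset path:

/* retry_sketch.c - bounded reconnect loop with a fixed delay, approximating
 * the ~14 ms retry cadence inferred from the log timestamps. Illustrative
 * only; not the bdev_nvme implementation. */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static bool try_connect(void)
{
    return false;                 /* stand-in: target still refusing */
}

int main(void)
{
    const struct timespec delay = { .tv_sec = 0, .tv_nsec = 14 * 1000 * 1000 };

    for (int attempt = 1; attempt <= 5; attempt++) {
        if (try_connect()) {
            printf("attempt %d: connected\n", attempt);
            return 0;
        }
        printf("attempt %d: resetting controller failed, retrying\n", attempt);
        nanosleep(&delay, NULL);  /* pacing between reset attempts */
    }
    fprintf(stderr, "giving up after 5 attempts\n");
    return 1;
}

The real poller is driven by spdk_nvme_ctrlr_reconnect_poll_async (visible in the log), which reports the reinitialization failure each cycle before the next reset is attempted.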
00:46:38.633 [2024-07-22 16:59:58.159345] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.633 [2024-07-22 16:59:58.159876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.633 [2024-07-22 16:59:58.159927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.633 [2024-07-22 16:59:58.159945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.633 [2024-07-22 16:59:58.160194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.633 [2024-07-22 16:59:58.160438] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.633 [2024-07-22 16:59:58.160462] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.633 [2024-07-22 16:59:58.160477] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.633 [2024-07-22 16:59:58.164059] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.633 [2024-07-22 16:59:58.173365] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.633 [2024-07-22 16:59:58.173771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.633 [2024-07-22 16:59:58.173802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.633 [2024-07-22 16:59:58.173820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.633 [2024-07-22 16:59:58.174069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.633 [2024-07-22 16:59:58.174313] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.633 [2024-07-22 16:59:58.174337] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.633 [2024-07-22 16:59:58.174353] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.633 [2024-07-22 16:59:58.177925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.633 [2024-07-22 16:59:58.187242] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.633 [2024-07-22 16:59:58.187744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.633 [2024-07-22 16:59:58.187775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.633 [2024-07-22 16:59:58.187792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.633 [2024-07-22 16:59:58.188044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.633 [2024-07-22 16:59:58.188288] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.633 [2024-07-22 16:59:58.188317] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.633 [2024-07-22 16:59:58.188334] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.633 [2024-07-22 16:59:58.191908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.633 [2024-07-22 16:59:58.201216] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.633 [2024-07-22 16:59:58.201671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.633 [2024-07-22 16:59:58.201702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.633 [2024-07-22 16:59:58.201720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.633 [2024-07-22 16:59:58.201958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.633 [2024-07-22 16:59:58.202216] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.633 [2024-07-22 16:59:58.202239] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.633 [2024-07-22 16:59:58.202254] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.633 [2024-07-22 16:59:58.205835] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.633 [2024-07-22 16:59:58.215154] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.633 [2024-07-22 16:59:58.215643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.633 [2024-07-22 16:59:58.215674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.633 [2024-07-22 16:59:58.215691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.633 [2024-07-22 16:59:58.215931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.633 [2024-07-22 16:59:58.216186] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.633 [2024-07-22 16:59:58.216211] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.633 [2024-07-22 16:59:58.216227] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.633 [2024-07-22 16:59:58.219802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.633 [2024-07-22 16:59:58.229194] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.633 [2024-07-22 16:59:58.229633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.633 [2024-07-22 16:59:58.229666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.633 [2024-07-22 16:59:58.229684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.633 [2024-07-22 16:59:58.229930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.633 [2024-07-22 16:59:58.230195] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.633 [2024-07-22 16:59:58.230222] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.633 [2024-07-22 16:59:58.230237] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.633 [2024-07-22 16:59:58.233844] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.633 [2024-07-22 16:59:58.243174] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.633 [2024-07-22 16:59:58.243706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.633 [2024-07-22 16:59:58.243738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.633 [2024-07-22 16:59:58.243756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.633 [2024-07-22 16:59:58.244006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.633 [2024-07-22 16:59:58.244250] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.633 [2024-07-22 16:59:58.244273] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.633 [2024-07-22 16:59:58.244289] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.634 [2024-07-22 16:59:58.247910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.634 [2024-07-22 16:59:58.257206] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.634 [2024-07-22 16:59:58.257708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.634 [2024-07-22 16:59:58.257740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.634 [2024-07-22 16:59:58.257758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.634 [2024-07-22 16:59:58.258009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.634 [2024-07-22 16:59:58.258253] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.634 [2024-07-22 16:59:58.258277] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.634 [2024-07-22 16:59:58.258293] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.634 [2024-07-22 16:59:58.261862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.634 [2024-07-22 16:59:58.271145] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.634 [2024-07-22 16:59:58.271628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.634 [2024-07-22 16:59:58.271659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.634 [2024-07-22 16:59:58.271677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.634 [2024-07-22 16:59:58.271916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.634 [2024-07-22 16:59:58.272172] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.634 [2024-07-22 16:59:58.272196] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.634 [2024-07-22 16:59:58.272212] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.634 [2024-07-22 16:59:58.275783] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.893 [2024-07-22 16:59:58.285148] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.893 [2024-07-22 16:59:58.285688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.893 [2024-07-22 16:59:58.285722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.893 [2024-07-22 16:59:58.285740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.893 [2024-07-22 16:59:58.286006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.893 [2024-07-22 16:59:58.286251] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.893 [2024-07-22 16:59:58.286275] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.893 [2024-07-22 16:59:58.286291] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.893 [2024-07-22 16:59:58.289862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.893 [2024-07-22 16:59:58.299154] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.893 [2024-07-22 16:59:58.299654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.893 [2024-07-22 16:59:58.299685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.893 [2024-07-22 16:59:58.299703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.893 [2024-07-22 16:59:58.299941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.893 [2024-07-22 16:59:58.300194] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.893 [2024-07-22 16:59:58.300219] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.893 [2024-07-22 16:59:58.300235] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.893 [2024-07-22 16:59:58.303806] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.893 [2024-07-22 16:59:58.313103] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.893 [2024-07-22 16:59:58.313601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.893 [2024-07-22 16:59:58.313632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.893 [2024-07-22 16:59:58.313649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.893 [2024-07-22 16:59:58.313887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.893 [2024-07-22 16:59:58.314141] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.893 [2024-07-22 16:59:58.314165] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.893 [2024-07-22 16:59:58.314181] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.893 [2024-07-22 16:59:58.317752] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.893 [2024-07-22 16:59:58.327047] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.893 [2024-07-22 16:59:58.327526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.893 [2024-07-22 16:59:58.327557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.893 [2024-07-22 16:59:58.327575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.893 [2024-07-22 16:59:58.327814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.893 [2024-07-22 16:59:58.328069] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.893 [2024-07-22 16:59:58.328093] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.893 [2024-07-22 16:59:58.328114] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.893 [2024-07-22 16:59:58.331688] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.893 [2024-07-22 16:59:58.340984] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.893 [2024-07-22 16:59:58.341461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.893 [2024-07-22 16:59:58.341493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.893 [2024-07-22 16:59:58.341511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.893 [2024-07-22 16:59:58.341749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.893 [2024-07-22 16:59:58.342004] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.893 [2024-07-22 16:59:58.342028] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.893 [2024-07-22 16:59:58.342044] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.893 [2024-07-22 16:59:58.345614] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.893 [2024-07-22 16:59:58.354911] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.893 [2024-07-22 16:59:58.355393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.893 [2024-07-22 16:59:58.355426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.893 [2024-07-22 16:59:58.355444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.893 [2024-07-22 16:59:58.355683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.893 [2024-07-22 16:59:58.355934] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.893 [2024-07-22 16:59:58.355959] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.894 [2024-07-22 16:59:58.355987] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.894 [2024-07-22 16:59:58.359561] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.894 [2024-07-22 16:59:58.368853] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.894 [2024-07-22 16:59:58.369310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.894 [2024-07-22 16:59:58.369342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.894 [2024-07-22 16:59:58.369359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.894 [2024-07-22 16:59:58.369598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.894 [2024-07-22 16:59:58.369841] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.894 [2024-07-22 16:59:58.369865] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.894 [2024-07-22 16:59:58.369881] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.894 [2024-07-22 16:59:58.373461] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.894 [2024-07-22 16:59:58.382739] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.894 [2024-07-22 16:59:58.383214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.894 [2024-07-22 16:59:58.383252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.894 [2024-07-22 16:59:58.383271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.894 [2024-07-22 16:59:58.383510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.894 [2024-07-22 16:59:58.383754] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.894 [2024-07-22 16:59:58.383777] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.894 [2024-07-22 16:59:58.383793] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.894 [2024-07-22 16:59:58.387374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.894 [2024-07-22 16:59:58.396659] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.894 [2024-07-22 16:59:58.397166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.894 [2024-07-22 16:59:58.397198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.894 [2024-07-22 16:59:58.397216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.894 [2024-07-22 16:59:58.397455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.894 [2024-07-22 16:59:58.397699] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.894 [2024-07-22 16:59:58.397723] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.894 [2024-07-22 16:59:58.397738] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.894 [2024-07-22 16:59:58.401319] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.894 [2024-07-22 16:59:58.410607] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.894 [2024-07-22 16:59:58.411111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.894 [2024-07-22 16:59:58.411144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.894 [2024-07-22 16:59:58.411162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.894 [2024-07-22 16:59:58.411401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.894 [2024-07-22 16:59:58.411644] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.894 [2024-07-22 16:59:58.411668] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.894 [2024-07-22 16:59:58.411684] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.894 [2024-07-22 16:59:58.415266] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.894 [2024-07-22 16:59:58.424617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.894 [2024-07-22 16:59:58.425124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.894 [2024-07-22 16:59:58.425156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.894 [2024-07-22 16:59:58.425174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.894 [2024-07-22 16:59:58.425413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.894 [2024-07-22 16:59:58.425662] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.894 [2024-07-22 16:59:58.425686] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.894 [2024-07-22 16:59:58.425702] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.894 [2024-07-22 16:59:58.429287] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.894 [2024-07-22 16:59:58.438576] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.894 [2024-07-22 16:59:58.439055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.894 [2024-07-22 16:59:58.439087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.894 [2024-07-22 16:59:58.439105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.894 [2024-07-22 16:59:58.439344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.894 [2024-07-22 16:59:58.439587] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.894 [2024-07-22 16:59:58.439610] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.894 [2024-07-22 16:59:58.439626] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.894 [2024-07-22 16:59:58.443206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.894 [2024-07-22 16:59:58.452492] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.894 [2024-07-22 16:59:58.452949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.894 [2024-07-22 16:59:58.452990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.894 [2024-07-22 16:59:58.453008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.894 [2024-07-22 16:59:58.453247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.894 [2024-07-22 16:59:58.453491] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.894 [2024-07-22 16:59:58.453515] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.894 [2024-07-22 16:59:58.453530] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.894 [2024-07-22 16:59:58.457162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.894 [2024-07-22 16:59:58.466457] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.894 [2024-07-22 16:59:58.466858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.894 [2024-07-22 16:59:58.466890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.894 [2024-07-22 16:59:58.466908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.894 [2024-07-22 16:59:58.467157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.894 [2024-07-22 16:59:58.467402] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.894 [2024-07-22 16:59:58.467427] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.894 [2024-07-22 16:59:58.467442] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.894 [2024-07-22 16:59:58.471031] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.894 [2024-07-22 16:59:58.480456] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.894 [2024-07-22 16:59:58.480862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.894 [2024-07-22 16:59:58.480894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.894 [2024-07-22 16:59:58.480913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.894 [2024-07-22 16:59:58.481160] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.894 [2024-07-22 16:59:58.481406] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.894 [2024-07-22 16:59:58.481431] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.894 [2024-07-22 16:59:58.481446] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.894 [2024-07-22 16:59:58.485032] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.894 [2024-07-22 16:59:58.494559] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.894 [2024-07-22 16:59:58.495048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.894 [2024-07-22 16:59:58.495081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.894 [2024-07-22 16:59:58.495099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.894 [2024-07-22 16:59:58.495337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.894 [2024-07-22 16:59:58.495580] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.894 [2024-07-22 16:59:58.495612] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.894 [2024-07-22 16:59:58.495629] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.895 [2024-07-22 16:59:58.499213] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.895 [2024-07-22 16:59:58.508511] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.895 [2024-07-22 16:59:58.508920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.895 [2024-07-22 16:59:58.508952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.895 [2024-07-22 16:59:58.508979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.895 [2024-07-22 16:59:58.509220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.895 [2024-07-22 16:59:58.509462] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.895 [2024-07-22 16:59:58.509486] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.895 [2024-07-22 16:59:58.509502] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.895 [2024-07-22 16:59:58.513082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:38.895 [2024-07-22 16:59:58.522390] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.895 [2024-07-22 16:59:58.522788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.895 [2024-07-22 16:59:58.522820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.895 [2024-07-22 16:59:58.522844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.895 [2024-07-22 16:59:58.523093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.895 [2024-07-22 16:59:58.523338] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.895 [2024-07-22 16:59:58.523362] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.895 [2024-07-22 16:59:58.523377] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.895 [2024-07-22 16:59:58.526953] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:38.895 [2024-07-22 16:59:58.536260] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:38.895 [2024-07-22 16:59:58.536695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:38.895 [2024-07-22 16:59:58.536726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:38.895 [2024-07-22 16:59:58.536744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:38.895 [2024-07-22 16:59:58.536992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:38.895 [2024-07-22 16:59:58.537236] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:38.895 [2024-07-22 16:59:58.537260] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:38.895 [2024-07-22 16:59:58.537276] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:38.895 [2024-07-22 16:59:58.540886] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.154 [2024-07-22 16:59:58.550249] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.154 [2024-07-22 16:59:58.550679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.154 [2024-07-22 16:59:58.550710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.154 [2024-07-22 16:59:58.550728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.154 [2024-07-22 16:59:58.550975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.154 [2024-07-22 16:59:58.551232] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.154 [2024-07-22 16:59:58.551257] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.154 [2024-07-22 16:59:58.551273] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.154 [2024-07-22 16:59:58.554843] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.154 [2024-07-22 16:59:58.564147] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.154 [2024-07-22 16:59:58.564551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.154 [2024-07-22 16:59:58.564582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.154 [2024-07-22 16:59:58.564600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.154 [2024-07-22 16:59:58.564839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.154 [2024-07-22 16:59:58.565094] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.154 [2024-07-22 16:59:58.565124] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.154 [2024-07-22 16:59:58.565142] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.154 [2024-07-22 16:59:58.568717] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.154 [2024-07-22 16:59:58.578025] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.154 [2024-07-22 16:59:58.578454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.154 [2024-07-22 16:59:58.578485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.154 [2024-07-22 16:59:58.578503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.154 [2024-07-22 16:59:58.578742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.154 [2024-07-22 16:59:58.578996] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.154 [2024-07-22 16:59:58.579020] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.154 [2024-07-22 16:59:58.579036] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.154 [2024-07-22 16:59:58.582608] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.154 [2024-07-22 16:59:58.591910] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.154 [2024-07-22 16:59:58.592329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.154 [2024-07-22 16:59:58.592361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.154 [2024-07-22 16:59:58.592379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.154 [2024-07-22 16:59:58.592618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.154 [2024-07-22 16:59:58.592862] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.154 [2024-07-22 16:59:58.592886] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.154 [2024-07-22 16:59:58.592901] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.154 [2024-07-22 16:59:58.596482] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.154 [2024-07-22 16:59:58.605784] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.154 [2024-07-22 16:59:58.606176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.154 [2024-07-22 16:59:58.606207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.154 [2024-07-22 16:59:58.606225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.154 [2024-07-22 16:59:58.606464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.154 [2024-07-22 16:59:58.606707] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.154 [2024-07-22 16:59:58.606731] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.154 [2024-07-22 16:59:58.606747] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.154 [2024-07-22 16:59:58.610327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.154 [2024-07-22 16:59:58.619624] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.154 [2024-07-22 16:59:58.620114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.154 [2024-07-22 16:59:58.620146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.154 [2024-07-22 16:59:58.620163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.154 [2024-07-22 16:59:58.620402] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.154 [2024-07-22 16:59:58.620645] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.154 [2024-07-22 16:59:58.620669] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.154 [2024-07-22 16:59:58.620684] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.154 [2024-07-22 16:59:58.624266] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.154 [2024-07-22 16:59:58.633553] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.154 [2024-07-22 16:59:58.634022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.154 [2024-07-22 16:59:58.634054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.154 [2024-07-22 16:59:58.634072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.154 [2024-07-22 16:59:58.634311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.154 [2024-07-22 16:59:58.634553] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.154 [2024-07-22 16:59:58.634577] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.154 [2024-07-22 16:59:58.634592] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.154 [2024-07-22 16:59:58.638174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.154 [2024-07-22 16:59:58.647459] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.154 [2024-07-22 16:59:58.647946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.154 [2024-07-22 16:59:58.647990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.154 [2024-07-22 16:59:58.648009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.154 [2024-07-22 16:59:58.648248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.154 [2024-07-22 16:59:58.648492] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.154 [2024-07-22 16:59:58.648515] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.154 [2024-07-22 16:59:58.648531] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.154 [2024-07-22 16:59:58.652110] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.154 [2024-07-22 16:59:58.661399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.154 [2024-07-22 16:59:58.661854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.154 [2024-07-22 16:59:58.661886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.154 [2024-07-22 16:59:58.661903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.154 [2024-07-22 16:59:58.662164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.154 [2024-07-22 16:59:58.662409] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.154 [2024-07-22 16:59:58.662433] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.154 [2024-07-22 16:59:58.662448] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.154 [2024-07-22 16:59:58.666077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.154 [2024-07-22 16:59:58.675361] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.154 [2024-07-22 16:59:58.675847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.154 [2024-07-22 16:59:58.675878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.154 [2024-07-22 16:59:58.675896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.155 [2024-07-22 16:59:58.676146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.155 [2024-07-22 16:59:58.676389] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.155 [2024-07-22 16:59:58.676413] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.155 [2024-07-22 16:59:58.676429] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.155 [2024-07-22 16:59:58.680008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.155 [2024-07-22 16:59:58.689292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.155 [2024-07-22 16:59:58.689771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.155 [2024-07-22 16:59:58.689803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.155 [2024-07-22 16:59:58.689821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.155 [2024-07-22 16:59:58.690073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.155 [2024-07-22 16:59:58.690317] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.155 [2024-07-22 16:59:58.690340] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.155 [2024-07-22 16:59:58.690356] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.155 [2024-07-22 16:59:58.693926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.155 [2024-07-22 16:59:58.703217] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.155 [2024-07-22 16:59:58.703775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.155 [2024-07-22 16:59:58.703819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.155 [2024-07-22 16:59:58.703839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.155 [2024-07-22 16:59:58.704104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.155 [2024-07-22 16:59:58.704350] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.155 [2024-07-22 16:59:58.704375] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.155 [2024-07-22 16:59:58.704397] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.155 [2024-07-22 16:59:58.707980] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.155 [2024-07-22 16:59:58.717056] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.155 [2024-07-22 16:59:58.717514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.155 [2024-07-22 16:59:58.717547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.155 [2024-07-22 16:59:58.717565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.155 [2024-07-22 16:59:58.717804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.155 [2024-07-22 16:59:58.718062] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.155 [2024-07-22 16:59:58.718087] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.155 [2024-07-22 16:59:58.718102] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.155 [2024-07-22 16:59:58.721673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.155 [2024-07-22 16:59:58.731075] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.155 [2024-07-22 16:59:58.731545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.155 [2024-07-22 16:59:58.731578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.155 [2024-07-22 16:59:58.731595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.155 [2024-07-22 16:59:58.731834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.155 [2024-07-22 16:59:58.732098] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.155 [2024-07-22 16:59:58.732124] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.155 [2024-07-22 16:59:58.732140] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.155 [2024-07-22 16:59:58.735758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.155 [2024-07-22 16:59:58.745059] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.155 [2024-07-22 16:59:58.745571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.155 [2024-07-22 16:59:58.745603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.155 [2024-07-22 16:59:58.745621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.155 [2024-07-22 16:59:58.745860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.155 [2024-07-22 16:59:58.746115] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.155 [2024-07-22 16:59:58.746139] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.155 [2024-07-22 16:59:58.746155] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.155 [2024-07-22 16:59:58.749729] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.155 [2024-07-22 16:59:58.759024] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.155 [2024-07-22 16:59:58.759505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.155 [2024-07-22 16:59:58.759537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.155 [2024-07-22 16:59:58.759555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.155 [2024-07-22 16:59:58.759794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.155 [2024-07-22 16:59:58.760051] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.155 [2024-07-22 16:59:58.760076] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.155 [2024-07-22 16:59:58.760092] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.155 [2024-07-22 16:59:58.763660] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.155 [2024-07-22 16:59:58.772949] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.155 [2024-07-22 16:59:58.773453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.155 [2024-07-22 16:59:58.773485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.155 [2024-07-22 16:59:58.773502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.155 [2024-07-22 16:59:58.773741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.155 [2024-07-22 16:59:58.773996] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.155 [2024-07-22 16:59:58.774021] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.155 [2024-07-22 16:59:58.774037] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.155 [2024-07-22 16:59:58.777606] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.155 [2024-07-22 16:59:58.786894] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.155 [2024-07-22 16:59:58.787345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.155 [2024-07-22 16:59:58.787376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.155 [2024-07-22 16:59:58.787394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.155 [2024-07-22 16:59:58.787633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.155 [2024-07-22 16:59:58.787876] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.155 [2024-07-22 16:59:58.787900] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.155 [2024-07-22 16:59:58.787916] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.155 [2024-07-22 16:59:58.791496] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.155 [2024-07-22 16:59:58.800816] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.155 [2024-07-22 16:59:58.801317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.155 [2024-07-22 16:59:58.801349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.155 [2024-07-22 16:59:58.801366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.155 [2024-07-22 16:59:58.801611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.155 [2024-07-22 16:59:58.801854] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.155 [2024-07-22 16:59:58.801878] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.155 [2024-07-22 16:59:58.801894] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.414 [2024-07-22 16:59:58.805507] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.414 [2024-07-22 16:59:58.814822] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.414 [2024-07-22 16:59:58.815313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.414 [2024-07-22 16:59:58.815346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.414 [2024-07-22 16:59:58.815364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.414 [2024-07-22 16:59:58.815603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.414 [2024-07-22 16:59:58.815846] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.414 [2024-07-22 16:59:58.815870] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.414 [2024-07-22 16:59:58.815886] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.414 [2024-07-22 16:59:58.819463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.414 [2024-07-22 16:59:58.828737] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.414 [2024-07-22 16:59:58.829201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.414 [2024-07-22 16:59:58.829233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.414 [2024-07-22 16:59:58.829251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.414 [2024-07-22 16:59:58.829490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.414 [2024-07-22 16:59:58.829732] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.414 [2024-07-22 16:59:58.829756] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.414 [2024-07-22 16:59:58.829771] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.414 [2024-07-22 16:59:58.833350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.414 [2024-07-22 16:59:58.842623] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.414 [2024-07-22 16:59:58.843091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.414 [2024-07-22 16:59:58.843123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.414 [2024-07-22 16:59:58.843141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.414 [2024-07-22 16:59:58.843380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.414 [2024-07-22 16:59:58.843623] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.414 [2024-07-22 16:59:58.843647] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.414 [2024-07-22 16:59:58.843669] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.414 [2024-07-22 16:59:58.847255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.414 [2024-07-22 16:59:58.856539] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.414 [2024-07-22 16:59:58.857066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.414 [2024-07-22 16:59:58.857098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.414 [2024-07-22 16:59:58.857116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.414 [2024-07-22 16:59:58.857355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.414 [2024-07-22 16:59:58.857598] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.414 [2024-07-22 16:59:58.857623] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.414 [2024-07-22 16:59:58.857639] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.414 [2024-07-22 16:59:58.861222] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.414 [2024-07-22 16:59:58.870515] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.414 [2024-07-22 16:59:58.871025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.414 [2024-07-22 16:59:58.871058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.414 [2024-07-22 16:59:58.871076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.414 [2024-07-22 16:59:58.871316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.414 [2024-07-22 16:59:58.871559] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.414 [2024-07-22 16:59:58.871583] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.414 [2024-07-22 16:59:58.871599] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.414 [2024-07-22 16:59:58.875183] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.414 [2024-07-22 16:59:58.884468] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.414 [2024-07-22 16:59:58.884974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.414 [2024-07-22 16:59:58.885007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.414 [2024-07-22 16:59:58.885025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.414 [2024-07-22 16:59:58.885264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.414 [2024-07-22 16:59:58.885507] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.414 [2024-07-22 16:59:58.885531] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.414 [2024-07-22 16:59:58.885547] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.414 [2024-07-22 16:59:58.889125] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.414 [2024-07-22 16:59:58.898411] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.414 [2024-07-22 16:59:58.898934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.414 [2024-07-22 16:59:58.898979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.414 [2024-07-22 16:59:58.899000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.414 [2024-07-22 16:59:58.899240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.414 [2024-07-22 16:59:58.899484] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.414 [2024-07-22 16:59:58.899508] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.414 [2024-07-22 16:59:58.899523] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.414 [2024-07-22 16:59:58.903103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.414 [2024-07-22 16:59:58.912391] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.414 [2024-07-22 16:59:58.912905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.414 [2024-07-22 16:59:58.912937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.414 [2024-07-22 16:59:58.912955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.414 [2024-07-22 16:59:58.913205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.414 [2024-07-22 16:59:58.913449] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.414 [2024-07-22 16:59:58.913473] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.414 [2024-07-22 16:59:58.913489] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.414 [2024-07-22 16:59:58.917063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.414 [2024-07-22 16:59:58.926348] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.414 [2024-07-22 16:59:58.926816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.414 [2024-07-22 16:59:58.926848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.414 [2024-07-22 16:59:58.926865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.415 [2024-07-22 16:59:58.927115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.415 [2024-07-22 16:59:58.927359] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.415 [2024-07-22 16:59:58.927383] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.415 [2024-07-22 16:59:58.927399] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.415 [2024-07-22 16:59:58.930976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.415 [2024-07-22 16:59:58.940372] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.415 [2024-07-22 16:59:58.940875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.415 [2024-07-22 16:59:58.940909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.415 [2024-07-22 16:59:58.940930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.415 [2024-07-22 16:59:58.941180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.415 [2024-07-22 16:59:58.941431] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.415 [2024-07-22 16:59:58.941455] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.415 [2024-07-22 16:59:58.941471] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.415 [2024-07-22 16:59:58.945050] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.415 [2024-07-22 16:59:58.954323] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.415 [2024-07-22 16:59:58.954838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.415 [2024-07-22 16:59:58.954870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.415 [2024-07-22 16:59:58.954888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.415 [2024-07-22 16:59:58.955139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.415 [2024-07-22 16:59:58.955383] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.415 [2024-07-22 16:59:58.955407] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.415 [2024-07-22 16:59:58.955423] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.415 [2024-07-22 16:59:58.959000] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.415 [2024-07-22 16:59:58.968284] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.415 [2024-07-22 16:59:58.968853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.415 [2024-07-22 16:59:58.968898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.415 [2024-07-22 16:59:58.968917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.415 [2024-07-22 16:59:58.969177] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.415 [2024-07-22 16:59:58.969422] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.415 [2024-07-22 16:59:58.969446] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.415 [2024-07-22 16:59:58.969462] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.415 [2024-07-22 16:59:58.973044] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.415 [2024-07-22 16:59:58.982214] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.415 [2024-07-22 16:59:58.982701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.415 [2024-07-22 16:59:58.982734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.415 [2024-07-22 16:59:58.982752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.415 [2024-07-22 16:59:58.983013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.415 [2024-07-22 16:59:58.983258] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.415 [2024-07-22 16:59:58.983282] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.415 [2024-07-22 16:59:58.983298] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.415 [2024-07-22 16:59:58.986924] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.415 [2024-07-22 16:59:58.996217] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.415 [2024-07-22 16:59:58.996745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.415 [2024-07-22 16:59:58.996779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.415 [2024-07-22 16:59:58.996797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.415 [2024-07-22 16:59:58.997051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.415 [2024-07-22 16:59:58.997296] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.415 [2024-07-22 16:59:58.997320] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.415 [2024-07-22 16:59:58.997336] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.415 [2024-07-22 16:59:59.000909] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.415 [2024-07-22 16:59:59.010206] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.415 [2024-07-22 16:59:59.010721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.415 [2024-07-22 16:59:59.010754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.415 [2024-07-22 16:59:59.010772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.415 [2024-07-22 16:59:59.011024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.415 [2024-07-22 16:59:59.011269] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.415 [2024-07-22 16:59:59.011292] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.415 [2024-07-22 16:59:59.011308] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.415 [2024-07-22 16:59:59.014879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.415 [2024-07-22 16:59:59.024171] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.415 [2024-07-22 16:59:59.024696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.415 [2024-07-22 16:59:59.024729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.415 [2024-07-22 16:59:59.024747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.415 [2024-07-22 16:59:59.024998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.415 [2024-07-22 16:59:59.025243] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.415 [2024-07-22 16:59:59.025267] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.415 [2024-07-22 16:59:59.025282] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.415 [2024-07-22 16:59:59.028853] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.415 [2024-07-22 16:59:59.038147] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.415 [2024-07-22 16:59:59.038659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.415 [2024-07-22 16:59:59.038691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.415 [2024-07-22 16:59:59.038715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.415 [2024-07-22 16:59:59.038956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.415 [2024-07-22 16:59:59.039211] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.415 [2024-07-22 16:59:59.039235] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.415 [2024-07-22 16:59:59.039251] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.415 [2024-07-22 16:59:59.042823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.415 [2024-07-22 16:59:59.052112] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.415 [2024-07-22 16:59:59.052644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.415 [2024-07-22 16:59:59.052676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.415 [2024-07-22 16:59:59.052693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.415 [2024-07-22 16:59:59.052932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.416 [2024-07-22 16:59:59.053185] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.416 [2024-07-22 16:59:59.053210] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.416 [2024-07-22 16:59:59.053226] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.416 [2024-07-22 16:59:59.056797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.676 [2024-07-22 16:59:59.066156] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.676 [2024-07-22 16:59:59.066635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.676 [2024-07-22 16:59:59.066667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.676 [2024-07-22 16:59:59.066685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.676 [2024-07-22 16:59:59.066924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.676 [2024-07-22 16:59:59.067190] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.676 [2024-07-22 16:59:59.067215] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.676 [2024-07-22 16:59:59.067231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.676 [2024-07-22 16:59:59.070810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:39.676 [2024-07-22 16:59:59.080178] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.676 [2024-07-22 16:59:59.080658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.676 [2024-07-22 16:59:59.080691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.676 [2024-07-22 16:59:59.080709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.676 [2024-07-22 16:59:59.080948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.676 [2024-07-22 16:59:59.081203] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.676 [2024-07-22 16:59:59.081233] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.676 [2024-07-22 16:59:59.081250] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.676 [2024-07-22 16:59:59.084826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:39.676 [2024-07-22 16:59:59.094129] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:39.676 [2024-07-22 16:59:59.094563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:39.676 [2024-07-22 16:59:59.094595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:39.677 [2024-07-22 16:59:59.094613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:39.677 [2024-07-22 16:59:59.094852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:39.677 [2024-07-22 16:59:59.095106] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:39.677 [2024-07-22 16:59:59.095131] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:39.677 [2024-07-22 16:59:59.095147] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:39.677 [2024-07-22 16:59:59.098722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 2024-07-22 16:59:59.108 through 16:59:59.767: the identical reset cycle repeats another 48 times against tqpair=0x17a3e70 (addr=10.0.0.2, port=4420, connect() errno = 111), roughly one cycle every 14 ms, each iteration ending with "Resetting controller failed." ...]
00:46:40.198 [2024-07-22 16:59:59.776889] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.198 [2024-07-22 16:59:59.777373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.198 [2024-07-22 16:59:59.777405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.198 [2024-07-22 16:59:59.777423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.198 [2024-07-22 16:59:59.777662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.198 [2024-07-22 16:59:59.777904] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.198 [2024-07-22 16:59:59.777928] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.198 [2024-07-22 16:59:59.777944] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.198 [2024-07-22 16:59:59.781522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.198 [2024-07-22 16:59:59.790824] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.198 [2024-07-22 16:59:59.791335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.198 [2024-07-22 16:59:59.791367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.198 [2024-07-22 16:59:59.791385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.198 [2024-07-22 16:59:59.791623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.198 [2024-07-22 16:59:59.791866] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.198 [2024-07-22 16:59:59.791890] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.198 [2024-07-22 16:59:59.791906] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.198 [2024-07-22 16:59:59.795488] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.198 [2024-07-22 16:59:59.804781] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.198 [2024-07-22 16:59:59.805277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.198 [2024-07-22 16:59:59.805309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.198 [2024-07-22 16:59:59.805329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.198 [2024-07-22 16:59:59.805567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.198 [2024-07-22 16:59:59.805810] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.198 [2024-07-22 16:59:59.805834] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.198 [2024-07-22 16:59:59.805850] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.198 [2024-07-22 16:59:59.809438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.198 [2024-07-22 16:59:59.818734] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.198 [2024-07-22 16:59:59.819267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.198 [2024-07-22 16:59:59.819320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.198 [2024-07-22 16:59:59.819338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.198 [2024-07-22 16:59:59.819577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.198 [2024-07-22 16:59:59.819820] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.198 [2024-07-22 16:59:59.819844] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.198 [2024-07-22 16:59:59.819859] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.198 [2024-07-22 16:59:59.823444] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.198 [2024-07-22 16:59:59.832752] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.198 [2024-07-22 16:59:59.833208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.198 [2024-07-22 16:59:59.833240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.198 [2024-07-22 16:59:59.833258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.198 [2024-07-22 16:59:59.833497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.198 [2024-07-22 16:59:59.833746] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.198 [2024-07-22 16:59:59.833770] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.198 [2024-07-22 16:59:59.833786] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.198 [2024-07-22 16:59:59.837371] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.458 [2024-07-22 16:59:59.846718] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.458 [2024-07-22 16:59:59.847232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.458 [2024-07-22 16:59:59.847264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.458 [2024-07-22 16:59:59.847282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.458 [2024-07-22 16:59:59.847521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.458 [2024-07-22 16:59:59.847764] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.458 [2024-07-22 16:59:59.847787] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.458 [2024-07-22 16:59:59.847803] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.458 [2024-07-22 16:59:59.851411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.458 [2024-07-22 16:59:59.860699] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.458 [2024-07-22 16:59:59.861198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.458 [2024-07-22 16:59:59.861230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.458 [2024-07-22 16:59:59.861247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.458 [2024-07-22 16:59:59.861485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.458 [2024-07-22 16:59:59.861729] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.458 [2024-07-22 16:59:59.861752] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.458 [2024-07-22 16:59:59.861768] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.458 [2024-07-22 16:59:59.865355] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.458 [2024-07-22 16:59:59.874648] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.458 [2024-07-22 16:59:59.875172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.458 [2024-07-22 16:59:59.875204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.458 [2024-07-22 16:59:59.875221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.458 [2024-07-22 16:59:59.875460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.458 [2024-07-22 16:59:59.875704] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.458 [2024-07-22 16:59:59.875727] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.458 [2024-07-22 16:59:59.875743] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.458 [2024-07-22 16:59:59.879336] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.458 [2024-07-22 16:59:59.888629] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.458 [2024-07-22 16:59:59.889140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.458 [2024-07-22 16:59:59.889172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.458 [2024-07-22 16:59:59.889189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.458 [2024-07-22 16:59:59.889428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.458 [2024-07-22 16:59:59.889672] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.458 [2024-07-22 16:59:59.889695] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.458 [2024-07-22 16:59:59.889711] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.458 [2024-07-22 16:59:59.893302] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.458 [2024-07-22 16:59:59.902590] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.458 [2024-07-22 16:59:59.903193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.458 [2024-07-22 16:59:59.903238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.458 [2024-07-22 16:59:59.903258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.458 [2024-07-22 16:59:59.903503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.458 [2024-07-22 16:59:59.903748] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.458 [2024-07-22 16:59:59.903772] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.458 [2024-07-22 16:59:59.903788] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.458 [2024-07-22 16:59:59.907385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.458 [2024-07-22 16:59:59.916517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.458 [2024-07-22 16:59:59.917053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.458 [2024-07-22 16:59:59.917087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.458 [2024-07-22 16:59:59.917105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.458 [2024-07-22 16:59:59.917345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.458 [2024-07-22 16:59:59.917588] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.459 [2024-07-22 16:59:59.917612] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.459 [2024-07-22 16:59:59.917628] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.459 [2024-07-22 16:59:59.921215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.459 [2024-07-22 16:59:59.930496] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.459 [2024-07-22 16:59:59.930997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.459 [2024-07-22 16:59:59.931030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.459 [2024-07-22 16:59:59.931053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.459 [2024-07-22 16:59:59.931294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.459 [2024-07-22 16:59:59.931537] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.459 [2024-07-22 16:59:59.931561] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.459 [2024-07-22 16:59:59.931576] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.459 [2024-07-22 16:59:59.935161] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.459 [2024-07-22 16:59:59.944441] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.459 [2024-07-22 16:59:59.944947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.459 [2024-07-22 16:59:59.944986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.459 [2024-07-22 16:59:59.945005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.459 [2024-07-22 16:59:59.945244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.459 [2024-07-22 16:59:59.945488] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.459 [2024-07-22 16:59:59.945511] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.459 [2024-07-22 16:59:59.945527] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.459 [2024-07-22 16:59:59.949106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.459 [2024-07-22 16:59:59.958387] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.459 [2024-07-22 16:59:59.958914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.459 [2024-07-22 16:59:59.958998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.459 [2024-07-22 16:59:59.959019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.459 [2024-07-22 16:59:59.959259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.459 [2024-07-22 16:59:59.959502] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.459 [2024-07-22 16:59:59.959527] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.459 [2024-07-22 16:59:59.959542] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.459 [2024-07-22 16:59:59.963127] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.459 [2024-07-22 16:59:59.972415] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.459 [2024-07-22 16:59:59.972913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.459 [2024-07-22 16:59:59.972945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.459 [2024-07-22 16:59:59.972962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.459 [2024-07-22 16:59:59.973215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.459 [2024-07-22 16:59:59.973465] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.459 [2024-07-22 16:59:59.973489] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.459 [2024-07-22 16:59:59.973505] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.459 [2024-07-22 16:59:59.977090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.459 [2024-07-22 16:59:59.986494] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.459 [2024-07-22 16:59:59.986955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.459 [2024-07-22 16:59:59.987025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.459 [2024-07-22 16:59:59.987043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.459 [2024-07-22 16:59:59.987291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.459 [2024-07-22 16:59:59.987536] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.459 [2024-07-22 16:59:59.987560] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.459 [2024-07-22 16:59:59.987575] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.459 [2024-07-22 16:59:59.991196] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.459 [2024-07-22 17:00:00.000486] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.459 [2024-07-22 17:00:00.001044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.459 [2024-07-22 17:00:00.001077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.459 [2024-07-22 17:00:00.001095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.459 [2024-07-22 17:00:00.001334] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.459 [2024-07-22 17:00:00.001578] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.459 [2024-07-22 17:00:00.001605] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.459 [2024-07-22 17:00:00.001622] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.459 [2024-07-22 17:00:00.005718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.459 [2024-07-22 17:00:00.014393] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.459 [2024-07-22 17:00:00.014874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.459 [2024-07-22 17:00:00.014910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.459 [2024-07-22 17:00:00.014931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.459 [2024-07-22 17:00:00.015186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.459 [2024-07-22 17:00:00.015443] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.459 [2024-07-22 17:00:00.015468] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.459 [2024-07-22 17:00:00.015487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.459 [2024-07-22 17:00:00.019064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.459 [2024-07-22 17:00:00.028387] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.459 [2024-07-22 17:00:00.028901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.459 [2024-07-22 17:00:00.028955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.459 [2024-07-22 17:00:00.028983] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.459 [2024-07-22 17:00:00.029225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.459 [2024-07-22 17:00:00.029471] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.459 [2024-07-22 17:00:00.029495] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.459 [2024-07-22 17:00:00.029511] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.459 [2024-07-22 17:00:00.033091] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.459 [2024-07-22 17:00:00.042392] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.459 [2024-07-22 17:00:00.042823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.459 [2024-07-22 17:00:00.042855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.459 [2024-07-22 17:00:00.042873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.459 [2024-07-22 17:00:00.043121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.459 [2024-07-22 17:00:00.043366] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.459 [2024-07-22 17:00:00.043390] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.459 [2024-07-22 17:00:00.043406] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.459 [2024-07-22 17:00:00.046983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.459 [2024-07-22 17:00:00.056254] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.459 [2024-07-22 17:00:00.056665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.459 [2024-07-22 17:00:00.056697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.459 [2024-07-22 17:00:00.056715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.459 [2024-07-22 17:00:00.056954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.459 [2024-07-22 17:00:00.057206] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.459 [2024-07-22 17:00:00.057231] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.460 [2024-07-22 17:00:00.057247] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.460 [2024-07-22 17:00:00.060813] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.460 [2024-07-22 17:00:00.070093] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.460 [2024-07-22 17:00:00.070533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.460 [2024-07-22 17:00:00.070583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.460 [2024-07-22 17:00:00.070617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.460 [2024-07-22 17:00:00.070857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.460 [2024-07-22 17:00:00.071110] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.460 [2024-07-22 17:00:00.071135] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.460 [2024-07-22 17:00:00.071151] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.460 [2024-07-22 17:00:00.074720] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.460 [2024-07-22 17:00:00.084010] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.460 [2024-07-22 17:00:00.084481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.460 [2024-07-22 17:00:00.084513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.460 [2024-07-22 17:00:00.084530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.460 [2024-07-22 17:00:00.084769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.460 [2024-07-22 17:00:00.085022] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.460 [2024-07-22 17:00:00.085048] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.460 [2024-07-22 17:00:00.085063] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.460 [2024-07-22 17:00:00.088634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.460 [2024-07-22 17:00:00.097914] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.460 [2024-07-22 17:00:00.098418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.460 [2024-07-22 17:00:00.098450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.460 [2024-07-22 17:00:00.098467] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.460 [2024-07-22 17:00:00.098706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.460 [2024-07-22 17:00:00.098949] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.460 [2024-07-22 17:00:00.098986] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.460 [2024-07-22 17:00:00.099003] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.460 [2024-07-22 17:00:00.102602] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.718 [2024-07-22 17:00:00.111774] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.718 [2024-07-22 17:00:00.112216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.718 [2024-07-22 17:00:00.112249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.718 [2024-07-22 17:00:00.112267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.718 [2024-07-22 17:00:00.112506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.718 [2024-07-22 17:00:00.112749] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.718 [2024-07-22 17:00:00.112778] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.718 [2024-07-22 17:00:00.112794] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.718 [2024-07-22 17:00:00.116375] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.719 [2024-07-22 17:00:00.125735] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.126145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.126178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.126196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.126436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.126679] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.126702] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.126718] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.130297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.719 [2024-07-22 17:00:00.139576] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.140017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.140049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.140067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.140306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.140549] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.140573] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.140589] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.144165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.719 [2024-07-22 17:00:00.153446] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.153846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.153878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.153896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.154144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.154389] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.154413] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.154428] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.158009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.719 [2024-07-22 17:00:00.167297] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.167776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.167813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.167831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.168082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.168326] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.168349] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.168365] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.171937] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.719 [2024-07-22 17:00:00.181230] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.181619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.181651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.181669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.181907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.182160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.182185] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.182201] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.185771] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.719 [2024-07-22 17:00:00.195269] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.195732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.195782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.195800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.196049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.196293] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.196316] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.196332] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.199901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.719 [2024-07-22 17:00:00.209197] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.209623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.209654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.209672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.209916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.210168] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.210193] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.210209] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.213782] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.719 [2024-07-22 17:00:00.223082] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.223511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.223543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.223560] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.223799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.224054] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.224079] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.224095] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.227663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.719 [2024-07-22 17:00:00.237102] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.237583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.237615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.237633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.237871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.238123] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.238148] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.238164] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.241736] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.719 [2024-07-22 17:00:00.251033] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.251514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.251545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.251563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.251802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.252055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.252080] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.252101] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.255676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.719 [2024-07-22 17:00:00.264989] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.265419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.265451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.265469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.265709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.265952] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.265987] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.266004] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.269570] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.719 [2024-07-22 17:00:00.278847] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.279276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.279330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.279349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.279587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.279831] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.279855] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.279871] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.283449] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:40.719 [2024-07-22 17:00:00.292735] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:40.719 [2024-07-22 17:00:00.293142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:40.719 [2024-07-22 17:00:00.293174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420 00:46:40.719 [2024-07-22 17:00:00.293192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set 00:46:40.719 [2024-07-22 17:00:00.293431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor 00:46:40.719 [2024-07-22 17:00:00.293674] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:40.719 [2024-07-22 17:00:00.293698] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:46:40.719 [2024-07-22 17:00:00.293714] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:46:40.719 [2024-07-22 17:00:00.297293] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:40.719 [2024-07-22 17:00:00.306584] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:46:40.719 [2024-07-22 17:00:00.306998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:46:40.719 [2024-07-22 17:00:00.307035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a3e70 with addr=10.0.0.2, port=4420
00:46:40.719 [2024-07-22 17:00:00.307054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3e70 is same with the state(5) to be set
00:46:40.719 [2024-07-22 17:00:00.307292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a3e70 (9): Bad file descriptor
00:46:40.719 [2024-07-22 17:00:00.307537] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:46:40.719 [2024-07-22 17:00:00.307560] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:46:40.719 [2024-07-22 17:00:00.307576] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:46:40.719 [2024-07-22 17:00:00.311154] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... this nine-entry reset/reconnect failure sequence repeats near-verbatim at roughly 14 ms intervals from 17:00:00.320 until 17:00:00.876 while the initiator retries against the killed target; only the entries unique to that window are kept below, and the reset finally succeeds at 17:00:00.886 once the new listener is up ...]
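errno 111 is ECONNREFUSED: the initiator's connect() to 10.0.0.2:4420 is refused because the nvmf_tgt that owned the listener has just been killed (the "Killed" line below is the cause; these retries are the symptom). A quick way to check the listener from the test namespace is a plain TCP probe; this is an illustrative sketch only, with the netns name and address taken from this log, not part of the harness:

    # Probe the NVMe/TCP listen port; succeeds only once a target accepts again.
    if ip netns exec cvl_0_0_ns_spdk bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener up"
    else
        echo "connection refused (errno 111), target not listening yet"
    fi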
00:46:40.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2962637 Killed "${NVMF_APP[@]}" "$@"
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2963629
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2963629
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 2963629 ']'
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:46:40.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
00:46:40.979 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
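waitforlisten polls until the freshly started target (pid 2963629) answers on /var/tmp/spdk.sock. A minimal stand-alone equivalent, assuming SPDK's stock scripts/rpc.py from the source tree and mirroring the max_retries=100 above (a sketch, not the harness code):

    # Poll the SPDK RPC socket until the target responds.
    for i in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done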
00:46:40.980 [2024-07-22 17:00:00.510826] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:46:40.980 [2024-07-22 17:00:00.510894] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
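The -c 0xE in the EAL parameters (and -m 0xE on the nvmf_tgt command line) is the reactor core mask: binary 1110, so cores 1, 2 and 3. That is consistent with the "Total cores available: 3" notice and the three "Reactor started on core N" notices further down. A throwaway decode, for reference:

    # Expand an SPDK/DPDK core mask into its core list (prints cores 1, 2, 3 for 0xE).
    mask=0xE
    for core in {0..63}; do
        (( (mask >> core) & 1 )) && echo "core $core"
    done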
00:46:40.980 EAL: No free 2048 kB hugepages reported on node 1
00:46:40.980 [2024-07-22 17:00:00.600635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:46:41.240 [2024-07-22 17:00:00.692293] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:46:41.240 [2024-07-22 17:00:00.692339] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:46:41.240 [2024-07-22 17:00:00.692355] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:46:41.240 [2024-07-22 17:00:00.692369] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:46:41.240 [2024-07-22 17:00:00.692381] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:46:41.240 [2024-07-22 17:00:00.692440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:46:41.240 [2024-07-22 17:00:00.692556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:46:41.240 [2024-07-22 17:00:00.692559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
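Because the target was launched with -e 0xFFFF, every tracepoint group is armed, and the notices above spell out how to harvest the trace ring. Following those hints verbatim (the shared-memory file name comes from the log; treat this as a sketch):

    # Snapshot the running nvmf target's trace ring (shm instance id 0):
    spdk_trace -s nvmf -i 0 > nvmf_trace.out
    # Or keep the shared-memory copy for offline decoding after the target exits:
    cp /dev/shm/nvmf_trace.0 /tmp/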
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
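The rpc_cmd traces that follow stand up the target's data path one object at a time: TCP transport, a RAM-backed bdev, a subsystem, a namespace, and finally the listener. Consolidated into direct rpc.py calls against the default /var/tmp/spdk.sock, with the same arguments as the traces (a sketch, not the harness code):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420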
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:46:41.241 [2024-07-22 17:00:00.826062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:46:41.241 Malloc0
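"Malloc0" above is bdev_malloc_create's output: the name of the new 64 MiB RAM-backed bdev. It can be inspected before it is exported, for example (sketch):

    ./scripts/rpc.py bdev_get_bdevs -b Malloc0    # shows block size, num_blocks, claims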
00:46:41.241 17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
17:00:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
17:00:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
17:00:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-22 17:00:00.882821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
[2024-07-22 17:00:00.886050] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
17:00:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
17:00:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2962930
00:46:41.499 [2024-07-22 17:00:00.920897] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
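With the listener back, the pending reset finally succeeds and the harness waits for the bdevperf run (pid 2962930) to finish; its result table follows. The Job line records the run parameters (depth 128, 4 KiB IOs, 'verify' workload, roughly 15 s of runtime), and the nonzero Fail/s column reflects the IOs that failed while the target was down mid-run. The initiator-side invocation is shaped approximately like this; the flag mapping is inferred from the table and the binary path varies by SPDK version (sketch only):

    # -q queue depth, -o IO size in bytes, -w workload, -t runtime in seconds
    ./build/examples/bdevperf -q 128 -o 4096 -w verify -t 15   # plus a config attaching Nvme1n1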
00:46:51.480 00:46:51.480 Latency(us) 00:46:51.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:51.480 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:46:51.480 Verification LBA range: start 0x0 length 0x4000 00:46:51.480 Nvme1n1 : 15.01 6756.49 26.39 8438.06 0.00 8399.46 561.30 21942.42 00:46:51.480 =================================================================================================================== 00:46:51.480 Total : 6756.49 26.39 8438.06 0.00 8399.46 561.30 21942.42 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:46:51.480 rmmod nvme_tcp 00:46:51.480 rmmod nvme_fabrics 00:46:51.480 rmmod nvme_keyring 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2963629 ']' 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2963629 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 2963629 ']' 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 2963629 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2963629 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2963629' 00:46:51.480 killing process with pid 2963629 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 2963629 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 2963629 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
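Reading the bdevperf summary against its header, the columns line up as runtime(s)=15.01, IOPS=6756.49, MiB/s=26.39, Fail/s=8438.06, TO/s=0.00, then Average/min/max latency in microseconds; the high Fail/s is expected here, since I/O is issued continuously through the controller-kill/reconnect cycles. The teardown that follows is symmetric to the setup; a sketch under the same rpc.py assumption as above:

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem before module unload
  modprobe -v -r nvme-tcp       # as logged, this also pulls out nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics   # no-op if the previous line already removed it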
00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:51.480 17:00:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:53.413 17:00:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:46:53.413 00:46:53.413 real 0m23.484s 00:46:53.413 user 1m1.823s 00:46:53.413 sys 0m4.929s 00:46:53.413 17:00:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:46:53.413 17:00:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:46:53.413 ************************************ 00:46:53.413 END TEST nvmf_bdevperf 00:46:53.413 ************************************ 00:46:53.413 17:00:12 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:46:53.413 17:00:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:46:53.414 17:00:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:46:53.414 17:00:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:46:53.414 ************************************ 00:46:53.414 START TEST nvmf_target_disconnect 00:46:53.414 ************************************ 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:46:53.414 * Looking for test storage... 
00:46:53.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:46:53.414 17:00:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:46:55.942 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:46:55.943 Found 0000:82:00.0 (0x8086 - 0x159b) 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:46:55.943 Found 0000:82:00.1 (0x8086 - 0x159b) 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:55.943 17:00:15 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:46:55.943 Found net devices under 0000:82:00.0: cvl_0_0 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:46:55.943 Found net devices under 0000:82:00.1: cvl_0_1 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:46:55.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:55.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:46:55.943 00:46:55.943 --- 10.0.0.2 ping statistics --- 00:46:55.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:55.943 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:55.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:55.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:46:55.943 00:46:55.943 --- 10.0.0.1 ping statistics --- 00:46:55.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:55.943 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:46:55.943 ************************************ 00:46:55.943 START TEST nvmf_target_disconnect_tc1 00:46:55.943 ************************************ 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:46:55.943 
17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:46:55.943 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:55.943 EAL: No free 2048 kB hugepages reported on node 1 00:46:55.943 [2024-07-22 17:00:15.500032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:55.943 [2024-07-22 17:00:15.500121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42520 with addr=10.0.0.2, port=4420 00:46:55.943 [2024-07-22 17:00:15.500166] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:46:55.943 [2024-07-22 17:00:15.500196] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:55.943 [2024-07-22 17:00:15.500212] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:46:55.943 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:46:55.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:46:55.944 Initializing NVMe Controllers 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:46:55.944 00:46:55.944 real 0m0.110s 00:46:55.944 user 0m0.038s 00:46:55.944 sys 0m0.071s 
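tc1 deliberately runs the reconnect example while no target is listening and asserts that it fails: the NOT wrapper (the autotest helper traced above, not a shell builtin) inverts the exit status, so spdk_nvme_probe() dying with ECONNREFUSED and es=1 is the passing outcome. The shape of that assertion, sketched with a repo-relative path:

  # pass only if the probe fails while nothing listens on 10.0.0.2:4420
  NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'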
00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:46:55.944 ************************************ 00:46:55.944 END TEST nvmf_target_disconnect_tc1 00:46:55.944 ************************************ 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:46:55.944 ************************************ 00:46:55.944 START TEST nvmf_target_disconnect_tc2 00:46:55.944 ************************************ 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2967683 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2967683 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2967683 ']' 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:55.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:46:55.944 17:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:46:56.202 [2024-07-22 17:00:15.614443] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:46:56.203 [2024-07-22 17:00:15.614524] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:56.203 EAL: No free 2048 kB hugepages reported on node 1 00:46:56.203 [2024-07-22 17:00:15.697263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:56.203 [2024-07-22 17:00:15.794206] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:56.203 [2024-07-22 17:00:15.794270] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:56.203 [2024-07-22 17:00:15.794288] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:56.203 [2024-07-22 17:00:15.794302] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:56.203 [2024-07-22 17:00:15.794314] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:56.203 [2024-07-22 17:00:15.794404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:46:56.203 [2024-07-22 17:00:15.794459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:46:56.203 [2024-07-22 17:00:15.794522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:46:56.203 [2024-07-22 17:00:15.794525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:46:57.135 Malloc0 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:46:57.135 [2024-07-22 17:00:16.609812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:46:57.135 [2024-07-22 17:00:16.638060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2967812 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:57.135 17:00:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:46:57.135 EAL: No free 2048 kB hugepages reported on node 1 00:46:59.036 17:00:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2967683 00:46:59.036 17:00:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 
00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 [2024-07-22 17:00:18.663375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 
starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 [2024-07-22 17:00:18.663723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Read completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.036 starting I/O failed 00:46:59.036 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting 
I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 [2024-07-22 17:00:18.664150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 
00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Read completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 Write completed with error (sct=0, sc=8) 00:46:59.037 starting I/O failed 00:46:59.037 [2024-07-22 17:00:18.664461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:46:59.037 [2024-07-22 17:00:18.664693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.037 [2024-07-22 17:00:18.664730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.037 qpair failed and we were unable to recover it. 00:46:59.037 [2024-07-22 17:00:18.664888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.037 [2024-07-22 17:00:18.664913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.037 qpair failed and we were unable to recover it. 00:46:59.037 [2024-07-22 17:00:18.665093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.037 [2024-07-22 17:00:18.665121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.037 qpair failed and we were unable to recover it. 00:46:59.037 [2024-07-22 17:00:18.665310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.037 [2024-07-22 17:00:18.665333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.037 qpair failed and we were unable to recover it. 00:46:59.037 [2024-07-22 17:00:18.665489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.037 [2024-07-22 17:00:18.665513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.037 qpair failed and we were unable to recover it. 00:46:59.037 [2024-07-22 17:00:18.665669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.037 [2024-07-22 17:00:18.665692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.037 qpair failed and we were unable to recover it. 00:46:59.037 [2024-07-22 17:00:18.665873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.037 [2024-07-22 17:00:18.665897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.037 qpair failed and we were unable to recover it. 
00:46:59.318 [2024-07-22 17:00:18.703509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.703531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.703697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.703720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.703880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.703903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.704063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.704087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.704226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.704250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.704395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.704417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.704566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.704588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.704743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.704780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.704943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.704985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.705115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.705139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 
00:46:59.318 [2024-07-22 17:00:18.705295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.705322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.705481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.705503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.705655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.705693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.705870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.705892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.706075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.706099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.706263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.706286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.706467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.706490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.706638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.706660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.706836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.706858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.707016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.707040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 
00:46:59.318 [2024-07-22 17:00:18.707219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.707257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.707374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.707397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.707582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.707604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.707736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.707759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.707923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.707946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.708091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.708115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.708265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.318 [2024-07-22 17:00:18.708288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.318 qpair failed and we were unable to recover it. 00:46:59.318 [2024-07-22 17:00:18.708407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.708429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.708581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.708604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.708730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.708752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 
00:46:59.319 [2024-07-22 17:00:18.708897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.708919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.709040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.709064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.709206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.709229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.709376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.709399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.709586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.709609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.709720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.709756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.709934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.709957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.710095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.710119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.710275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.710298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.710477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.710500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 
00:46:59.319 [2024-07-22 17:00:18.710653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.710691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.710847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.710869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.711023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.711046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.711206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.711229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.711393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.711415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.711535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.711558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.711714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.711737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.711915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.711943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.712123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.712147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.712305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.712327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 
00:46:59.319 [2024-07-22 17:00:18.712501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.712523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.712657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.712680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.712805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.712828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.713029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.713053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.713235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.713258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.713414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.713451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.713565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.713602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.713739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.713762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.713900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.713923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.714042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.714066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 
00:46:59.319 [2024-07-22 17:00:18.714214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.714236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.714369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.714406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.714581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.714603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.714718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.714741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.714920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.714958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.715147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.715170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.715313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.715335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.715503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.715526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.715643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.715680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.715823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.715846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 
00:46:59.319 [2024-07-22 17:00:18.716027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.716065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.716234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.716256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.716443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.716466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.716624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.716647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.716759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.716782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.716975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.717000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.717138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.717161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.717336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.717358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.717507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.717534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.717704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.717727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 
00:46:59.319 [2024-07-22 17:00:18.717894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.717916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.718080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.718103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.718306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.718328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.718470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.718492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.319 qpair failed and we were unable to recover it. 00:46:59.319 [2024-07-22 17:00:18.718654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.319 [2024-07-22 17:00:18.718677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.718850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.718873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.719007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.719032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.719208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.719231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.719408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.719431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.719574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.719596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 
00:46:59.320 [2024-07-22 17:00:18.719750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.719772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.719958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.719988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.720174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.720197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.720370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.720393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.720553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.720576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.720690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.720713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.720895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.720918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.721058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.721082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.721216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.721254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.721416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.721439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 
00:46:59.320 [2024-07-22 17:00:18.721561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.721599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.721762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.721799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.721969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.721993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.722130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.722153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.722289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.722326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.722469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.722495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.722683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.722706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.722870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.722898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.723073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.723097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.723290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.723313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 
00:46:59.320 [2024-07-22 17:00:18.723482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.723504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.723653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.723676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.723857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.723879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.724029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.724051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.724231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.724254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.724371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.724409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.724540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.724563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.724710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.724733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.724877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.724901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.725055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.725094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 
00:46:59.320 [2024-07-22 17:00:18.725229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.725251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.725409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.725432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.725605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.725627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.725801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.725824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.725999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.726023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.726179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.726201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.726351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.726374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.726560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.726582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.726726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.726748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 00:46:59.320 [2024-07-22 17:00:18.726900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.320 [2024-07-22 17:00:18.726923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.320 qpair failed and we were unable to recover it. 
00:46:59.320 [2024-07-22 17:00:18.727122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:46:59.320 [2024-07-22 17:00:18.727147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:46:59.320 qpair failed and we were unable to recover it.
00:46:59.323 (the three messages above repeat ~210 times between 17:00:18.727 and 17:00:18.765, one triplet per connect() attempt; every attempt to 10.0.0.2:4420 failed with errno = 111 and the qpair was never recovered)
00:46:59.323 [2024-07-22 17:00:18.765133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:46:59.323 [2024-07-22 17:00:18.765156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:46:59.323 qpair failed and we were unable to recover it.
00:46:59.323 [2024-07-22 17:00:18.765293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.765315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.765478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.765514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.765637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.765660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.765840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.765878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.766037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.766075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.766217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.766240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.766368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.766405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.766590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.766611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.766734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.766757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.766882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.766911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 
00:46:59.323 [2024-07-22 17:00:18.767080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.767119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.767227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.767250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.767379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.767402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.767552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.767575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.767740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.767763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.767948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.767991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.768147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.768170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.768344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.323 [2024-07-22 17:00:18.768367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.323 qpair failed and we were unable to recover it. 00:46:59.323 [2024-07-22 17:00:18.768557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.768580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.768721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.768743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 
00:46:59.324 [2024-07-22 17:00:18.768893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.768916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.769072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.769096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.769236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.769274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.769433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.769470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.769655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.769680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.769821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.769845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.770002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.770027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.770195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.770219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.770359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.770382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.770541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.770564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 
00:46:59.324 [2024-07-22 17:00:18.770684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.770708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.770846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.770870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.770997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.771021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.771188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.771211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.771377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.771400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.771540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.771563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.771708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.771735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.771843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.771866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.772047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.772071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.772226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.772250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 
00:46:59.324 [2024-07-22 17:00:18.772395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.772433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.772573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.772596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.772752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.772776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.772910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.772949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.773063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.773086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.773241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.773265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.773440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.773462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.773625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.773647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.773821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.773845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.774010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.774034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 
00:46:59.324 [2024-07-22 17:00:18.774200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.774224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.774413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.774437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.774612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.774635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.774789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.774812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.775045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.775070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.775233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.775269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.775413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.775436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.775564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.775588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.775747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.775785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.775894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.775918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 
00:46:59.324 [2024-07-22 17:00:18.776073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.776097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.776255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.776278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.776433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.776456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.776633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.776665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.776836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.776859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.777054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.777091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.777239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.777264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.777430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.777453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.777610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.777633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.777786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.777810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 
00:46:59.324 [2024-07-22 17:00:18.777984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.778025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.778163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.778186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.778310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.778333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.778446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.778470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.778625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.778649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.778806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.778829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.778978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.779017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.779184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.779207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.779364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.779388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.779559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.779582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 
00:46:59.324 [2024-07-22 17:00:18.779747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.779771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.779925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.779948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.780127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.780150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.780366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.324 [2024-07-22 17:00:18.780389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.324 qpair failed and we were unable to recover it. 00:46:59.324 [2024-07-22 17:00:18.780538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.780561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.780700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.780737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.780912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.780936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.781094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.781118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.781284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.781307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.781457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.781480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 
00:46:59.325 [2024-07-22 17:00:18.781663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.781688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.781828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.781852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.782014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.782039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.782180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.782204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.782379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.782401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.782631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.782654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.782817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.782841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.783001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.783025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.783160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.783183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.783372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.783395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 
00:46:59.325 [2024-07-22 17:00:18.783573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.783595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.783749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.783772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.783897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.783922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.784157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.784185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.784351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.784374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.784562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.784585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.784733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.784756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.784932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.784955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.785145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.785169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.785324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.785346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 
00:46:59.325 [2024-07-22 17:00:18.785522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.785545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.785701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.785724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.785874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.785911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.786105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.786128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.786286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.786309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.786450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.786487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.786664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.786687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.786846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.786870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.787034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.787059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.787168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.787192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 
00:46:59.325 [2024-07-22 17:00:18.787368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.787392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.787537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.787559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.787733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.787756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.787907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.787931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.788075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.788114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.788280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.788303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.788490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.788513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.788629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.788667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.788840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.788863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 00:46:59.325 [2024-07-22 17:00:18.789012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.325 [2024-07-22 17:00:18.789036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.325 qpair failed and we were unable to recover it. 
00:46:59.325 [2024-07-22 17:00:18.789219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:46:59.325 [2024-07-22 17:00:18.789242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:46:59.325 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats continuously from 17:00:18.789 through 17:00:18.828, almost always for tqpair=0x7f8788000b90 and briefly (around 17:00:18.804) for tqpair=0x7f8780000b90, always against addr=10.0.0.2, port=4420 ...]
00:46:59.328 [2024-07-22 17:00:18.827957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:46:59.328 [2024-07-22 17:00:18.827985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:46:59.328 qpair failed and we were unable to recover it.
00:46:59.328 [2024-07-22 17:00:18.828144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.828168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.828271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.828295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.828452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.828491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.828657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.828679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.828809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.828846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.829031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.829056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.829205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.829228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.829341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.829364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.829518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.829555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.829660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.829683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 
00:46:59.328 [2024-07-22 17:00:18.829834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.829857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.829981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.830005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.830125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.830148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.830319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.830343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.830526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.830563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.830702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.830724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.830878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.830901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.831055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.831087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.831274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.831303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.831491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.831514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 
00:46:59.328 [2024-07-22 17:00:18.831706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.831729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.831904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.831926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.832121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.832145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.832289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.832314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.328 qpair failed and we were unable to recover it. 00:46:59.328 [2024-07-22 17:00:18.832471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.328 [2024-07-22 17:00:18.832494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.832614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.832637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.832814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.832838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.833007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.833030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.833208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.833231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.833424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.833447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 
00:46:59.329 [2024-07-22 17:00:18.833588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.833614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.833791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.833814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.833999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.834038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.834182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.834205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.834367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.834390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.834525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.834548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.834711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.834747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.834904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.834927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.835103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.835127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.835292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.835315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 
00:46:59.329 [2024-07-22 17:00:18.835497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.835520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.835670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.835708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.835844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.835881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.836037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.836062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.836202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.836226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.836389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.836411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.836589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.836612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.836762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.836786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.836926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.836954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.837145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.837169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 
00:46:59.329 [2024-07-22 17:00:18.837337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.837360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.837479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.837502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.837676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.837700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.837888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.837911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.838084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.838109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.838253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.838276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.838430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.838453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.838596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.838633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.838771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.838794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.838975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.838999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 
00:46:59.329 [2024-07-22 17:00:18.839112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.839136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.329 [2024-07-22 17:00:18.839289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.329 [2024-07-22 17:00:18.839312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.329 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.839465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.839488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.839631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.839668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.839820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.839843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.840023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.840047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.840198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.840221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.840384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.840408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.840587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.840625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.840762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.840785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 
00:46:59.330 [2024-07-22 17:00:18.840971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.840999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.841124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.841148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.841299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.841322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.841500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.841524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.841682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.841721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.841872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.841894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.842082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.842106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.842302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.842326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.842459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.842482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.842633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.842655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 
00:46:59.330 [2024-07-22 17:00:18.842786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.842825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.842988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.843013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.843167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.843190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.843347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.843384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.843498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.843535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.843671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.843694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.843874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.843912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.844030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.844056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.844188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.844212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.844367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.844390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 
00:46:59.330 [2024-07-22 17:00:18.844498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.844535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.844680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.844703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.844851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.844889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.845037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.330 [2024-07-22 17:00:18.845074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.330 qpair failed and we were unable to recover it. 00:46:59.330 [2024-07-22 17:00:18.845238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.845261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.845441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.845464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.845617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.845639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.845795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.845818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.845990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.846014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.846172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.846195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 
00:46:59.331 [2024-07-22 17:00:18.846330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.846353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.846515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.846538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.846639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.846662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.846850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.846873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.847024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.847048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.847188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.847227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.847337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.847360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.847514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.847538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.847718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.847741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.847876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.847899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 
00:46:59.331 [2024-07-22 17:00:18.848049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.848077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.848218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.848255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.848385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.848409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.848595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.848632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.848795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.848818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.848945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.848973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.849138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.849162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.849325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.849347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.849497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.849520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.849701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.849724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 
00:46:59.331 [2024-07-22 17:00:18.849875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.849897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.850013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.850037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.850169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.850193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.850343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.850380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.850566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.850589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.850761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.850784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.850921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.850949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.851126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.851150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.851336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.851359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 00:46:59.331 [2024-07-22 17:00:18.851464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.851501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it. 
00:46:59.331 [2024-07-22 17:00:18.851679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.331 [2024-07-22 17:00:18.851702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.331 qpair failed and we were unable to recover it.
[... the identical error triple — posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats without interruption from [2024-07-22 17:00:18.851679] through [2024-07-22 17:00:18.890670]; every connection attempt in this span fails the same way and no qpair is recovered ...]
00:46:59.334 [2024-07-22 17:00:18.890819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.890856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.891007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.891032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.891211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.891234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.891365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.891388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.891543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.891580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.891744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.891768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.891944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.891970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.892148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.892172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.892313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.892337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.892516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.892539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 
00:46:59.334 [2024-07-22 17:00:18.892678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.892704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.892868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.892891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.893065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.893090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.893205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.893229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.893414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.893436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.893589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.893612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.893735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.893759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.893914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.893938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.894160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.894183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.894370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.894393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 
00:46:59.334 [2024-07-22 17:00:18.894548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.894571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.894743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.894767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.894933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.894956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.895121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.895145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.895315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.895338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.895501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.895525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.895698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.895720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.895873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.895897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.896074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.896098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.896253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.896277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 
00:46:59.334 [2024-07-22 17:00:18.896456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.896479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.896658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.896680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.896876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.896899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.897081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.897106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.897212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.897250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.897385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.897408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.334 qpair failed and we were unable to recover it. 00:46:59.334 [2024-07-22 17:00:18.897564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.334 [2024-07-22 17:00:18.897588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.897799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.897823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.897967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.898006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.898164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.898189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 
00:46:59.335 [2024-07-22 17:00:18.898359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.898382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.898574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.898597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.898739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.898761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.898942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.898969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.899191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.899224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.899381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.899405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.899670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.899692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.899869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.899892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.900092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.900116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.900332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.900357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 
00:46:59.335 [2024-07-22 17:00:18.900571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.900597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.900758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.900781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.900950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.900978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.901105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.901128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.901300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.901337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.901594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.901617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.901811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.901834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.901975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.901999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.902186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.902210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.902360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.902383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 
00:46:59.335 [2024-07-22 17:00:18.902524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.902548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.902704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.902742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.902908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.902931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.903092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.903117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.903253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.903290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.903432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.903455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.903728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.903750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.903859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.903882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.904099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.904124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.904305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.904330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 
00:46:59.335 [2024-07-22 17:00:18.904528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.904551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.904816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.904839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.905074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.905098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.905278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.905301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.905455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.905477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.905721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.905749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.905986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.906019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.906173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.906210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.906433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.906456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.906612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.906635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 
00:46:59.335 [2024-07-22 17:00:18.906809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.906832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.907023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.907061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.907187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.907214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.907417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.907446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.907710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.907733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.908031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.908055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.908259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.908295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.908507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.908530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.908683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.908706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.908881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.908904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 
00:46:59.335 [2024-07-22 17:00:18.909132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.909156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.909290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.909318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.909512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.909536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.909695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.909718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.909846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.909884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.910015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.910040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.910176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.910200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.910386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.910413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.910667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.910690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.911002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.911027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 
00:46:59.335 [2024-07-22 17:00:18.911208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.911236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.335 [2024-07-22 17:00:18.911408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.335 [2024-07-22 17:00:18.911436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.335 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.911630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.911653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.911798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.911820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.912052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.912078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.912222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.912262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.912464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.912486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.912673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.912696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.912887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.912910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.913044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.913078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 
00:46:59.336 [2024-07-22 17:00:18.913258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.913281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.913491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.913525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.913687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.913710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.913894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.913917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.914067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.914106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.914245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.914269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.914432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.914455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.914692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.914735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.914859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.914883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.915089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.915115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 
00:46:59.336 [2024-07-22 17:00:18.915321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.915344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.915493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.915518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.915657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.915695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.915869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.915893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.916017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.916042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.916190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.916215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.916340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.916364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.916551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.916575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.916753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.916777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 00:46:59.336 [2024-07-22 17:00:18.916926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.916951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it. 
00:46:59.336 [2024-07-22 17:00:18.917131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.336 [2024-07-22 17:00:18.917155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.336 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously with only the timestamps advancing, from 17:00:18.917 through 17:00:18.953; identical repetitions elided ...]
00:46:59.610 [2024-07-22 17:00:18.954084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.610 [2024-07-22 17:00:18.954123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.610 qpair failed and we were unable to recover it.
[... from 17:00:18.954 onward the reported tqpair changes to 0x140c570; the same failure sequence repeats for that qpair through 17:00:18.958, identical repetitions elided ...]
00:46:59.610 [2024-07-22 17:00:18.958655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.610 [2024-07-22 17:00:18.958679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.610 qpair failed and we were unable to recover it.
00:46:59.610 [2024-07-22 17:00:18.958845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.610 [2024-07-22 17:00:18.958872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.610 qpair failed and we were unable to recover it. 00:46:59.610 [2024-07-22 17:00:18.959015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.610 [2024-07-22 17:00:18.959041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.610 qpair failed and we were unable to recover it. 00:46:59.610 [2024-07-22 17:00:18.959160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.610 [2024-07-22 17:00:18.959186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.610 qpair failed and we were unable to recover it. 00:46:59.610 [2024-07-22 17:00:18.959365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.610 [2024-07-22 17:00:18.959389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.610 qpair failed and we were unable to recover it. 00:46:59.610 [2024-07-22 17:00:18.959533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.610 [2024-07-22 17:00:18.959557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.610 qpair failed and we were unable to recover it. 00:46:59.610 [2024-07-22 17:00:18.959718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.610 [2024-07-22 17:00:18.959742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.610 qpair failed and we were unable to recover it. 00:46:59.610 [2024-07-22 17:00:18.959900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.610 [2024-07-22 17:00:18.959923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.610 qpair failed and we were unable to recover it. 00:46:59.610 [2024-07-22 17:00:18.960069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.610 [2024-07-22 17:00:18.960095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.610 qpair failed and we were unable to recover it. 00:46:59.610 [2024-07-22 17:00:18.960207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.610 [2024-07-22 17:00:18.960232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.610 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.960370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.960395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 
00:46:59.611 [2024-07-22 17:00:18.960507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.960534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.960671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.960697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.960797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.960822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.960983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.961010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.961139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.961179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.961337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.961364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.961478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.961502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.961670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.961697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.961877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.961916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.962065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.962091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 
00:46:59.611 [2024-07-22 17:00:18.962219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.962259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.962425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.962467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.962635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.962660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.962809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.962848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.963001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.963027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.963161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.963186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.963314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.963341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.963504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.963544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.963651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.963691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.963849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.963876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 
00:46:59.611 [2024-07-22 17:00:18.964011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.964037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.964177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.964203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.964350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.964376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.964532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.964561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.964729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.964760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.964930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.964959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.965188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.965214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.965397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.965449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.965592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.965640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.965800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.965828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 
00:46:59.611 [2024-07-22 17:00:18.965977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.966003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.966126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.966151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.966298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.966327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.966482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.966511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.966676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.966701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.966870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.611 [2024-07-22 17:00:18.966903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.611 qpair failed and we were unable to recover it. 00:46:59.611 [2024-07-22 17:00:18.967031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.967057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.967200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.967226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.967442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.967481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.967677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.967706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 
00:46:59.612 [2024-07-22 17:00:18.967845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.967874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.968023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.968051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.968194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.968219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.968346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.968389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.968539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.968571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.968795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.968824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.969051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.969077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.969227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.969260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.969435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.969465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.969691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.969719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 
00:46:59.612 [2024-07-22 17:00:18.969888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.969915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.970039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.970065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.970181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.970206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.970381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.970412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.970581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.970605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.970755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.970781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.970927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.970955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.971199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.971226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.971431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.971456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.971604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.971629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 
00:46:59.612 [2024-07-22 17:00:18.971775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.971804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.971984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.972028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.972168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.972198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.972334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.972360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.972516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.972545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.972693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.972722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.972878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.972905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.973058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.973087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.973216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.973242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.973407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.973435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 
00:46:59.612 [2024-07-22 17:00:18.973612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.973639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.973783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.973809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.973960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.974014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.974230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.612 [2024-07-22 17:00:18.974256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.612 qpair failed and we were unable to recover it. 00:46:59.612 [2024-07-22 17:00:18.974417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.974442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.974581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.974605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.974769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.974798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.974926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.974954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.975086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.975112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.975262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.975287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 
00:46:59.613 [2024-07-22 17:00:18.975418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.975447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.975618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.975649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.975809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.975838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.976017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.976044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.976155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.976183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.976334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.976362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.976527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.976551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.976713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.976739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.976906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.976934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.977095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.977124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 
00:46:59.613 [2024-07-22 17:00:18.977245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.977285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.977522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.977547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.977710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.977741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.977884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.977913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.978063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.978090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.978194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.978219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.978353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.978377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.978519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.978546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.978715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.978755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.978886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.978911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 
00:46:59.613 [2024-07-22 17:00:18.979087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.979113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.979225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.979253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.979416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.979441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.979567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141a0f0 is same with the state(5) to be set 00:46:59.613 [2024-07-22 17:00:18.979790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.979832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.613 qpair failed and we were unable to recover it. 00:46:59.613 [2024-07-22 17:00:18.980009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.613 [2024-07-22 17:00:18.980038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.980147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.980173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.980310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.980335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.980466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.980491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.980646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.980670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 
00:46:59.614 [2024-07-22 17:00:18.980770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.980794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.980941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.980988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.981093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.981119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.981263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.981288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.981429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.981452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.981604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.981628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.981772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.981801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.981947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.981982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.982137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.982163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 00:46:59.614 [2024-07-22 17:00:18.982299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.614 [2024-07-22 17:00:18.982338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.614 qpair failed and we were unable to recover it. 
00:46:59.614 [2024-07-22 17:00:18.982469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:46:59.614 [2024-07-22 17:00:18.982494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:46:59.614 qpair failed and we were unable to recover it.
00:46:59.614 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously from 17:00:18.982 to 17:00:19.019, alternating between tqpair=0x7f8788000b90, tqpair=0x140c570, and tqpair=0x7f8778000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:46:59.620 [2024-07-22 17:00:19.018909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:46:59.620 [2024-07-22 17:00:19.018939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420
00:46:59.620 qpair failed and we were unable to recover it.
00:46:59.620 [2024-07-22 17:00:19.019076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.019102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.019264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.019290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.019478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.019507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.019622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.019648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.019756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.019781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.019910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.019935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.020066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.020093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.020212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.020238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.020417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.020457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.020600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.020628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 
00:46:59.620 [2024-07-22 17:00:19.020766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.020809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.020933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.020961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.021119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.021145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.021276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.021305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.021421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.021450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.021611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.021640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.021808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.021833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.021946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.021994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.022139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.022166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.022309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.022335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 
00:46:59.620 [2024-07-22 17:00:19.022462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.022488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.022717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.022746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.022912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.022941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.023084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.023111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.023264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.023293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.023443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.023469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.023647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.023674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.023835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.023864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.024065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.024092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.024221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.024268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 
00:46:59.620 [2024-07-22 17:00:19.024393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.024421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.024619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.024648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.024789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.024819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.620 [2024-07-22 17:00:19.024950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.620 [2024-07-22 17:00:19.024985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.620 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.025114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.025140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.025280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.025306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.025429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.025458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.025619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.025648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.025796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.025827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.026001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.026028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 
00:46:59.621 [2024-07-22 17:00:19.026150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.026176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.026294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.026319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.026485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.026514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.026653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.026679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.026805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.026829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.026996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.027023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.027132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.027158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.027268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.027294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.027491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.027528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.027663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.027690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 
00:46:59.621 [2024-07-22 17:00:19.027888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.027915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.028076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.028103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.028215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.028241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.028345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.028371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.028540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.028567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.028705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.028731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.028876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.028906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.029042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.029069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.029176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.029202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.029359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.029385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 
00:46:59.621 [2024-07-22 17:00:19.029578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.029604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.029759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.029785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.029933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.029962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.030120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.030146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.030290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.030316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.030517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.030556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.030711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.030737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.030933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.030958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.031108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.031134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.031254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.031280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 
00:46:59.621 [2024-07-22 17:00:19.031414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.031439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.621 qpair failed and we were unable to recover it. 00:46:59.621 [2024-07-22 17:00:19.031654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.621 [2024-07-22 17:00:19.031683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.031810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.031839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.031988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.032032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.032143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.032169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.032312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.032354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.032500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.032524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.032677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.032706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.032860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.032889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.033019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.033045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 
00:46:59.622 [2024-07-22 17:00:19.033857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.033899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.034081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.034109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.034226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.034258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.034404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.034443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.034650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.034690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.034809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.034835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.034981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.035025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.035172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.035199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.035451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.035477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.035657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.035682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 
00:46:59.622 [2024-07-22 17:00:19.035821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.035847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.036018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.036061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.036239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.036300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.036470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.036506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.036680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.036715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.036935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.036997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.037156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.037189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.037344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.037372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.037511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.037537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.037683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.037709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 
00:46:59.622 [2024-07-22 17:00:19.037858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.037898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.038043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.038082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.038227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.038269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.622 qpair failed and we were unable to recover it. 00:46:59.622 [2024-07-22 17:00:19.038430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.622 [2024-07-22 17:00:19.038454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.038576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.038602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.038729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.038755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.038899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.038923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.039121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.039149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.039273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.039299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.039470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.039500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 
00:46:59.623 [2024-07-22 17:00:19.039655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.039686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.039837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.039877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.040023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.040050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.040163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.040189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.040331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.040355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.040517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.040547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.040734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.040765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.040923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.040949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.041086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.041111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.041248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.041301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 
00:46:59.623 [2024-07-22 17:00:19.041476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.041526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.041679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.041710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.041856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.041883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.042035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.042063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.042175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.042201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.042344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.042369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.042624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.042649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.042824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.042853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.043065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.043092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.043219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.043259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 
00:46:59.623 [2024-07-22 17:00:19.043396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.043429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.043624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.043653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.043768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.043797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.043973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.043999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.044132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.044158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.044334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.044363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.044498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.044521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.044755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.044784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.044928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.044957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 00:46:59.623 [2024-07-22 17:00:19.045118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.623 [2024-07-22 17:00:19.045144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.623 qpair failed and we were unable to recover it. 
00:46:59.629 [2024-07-22 17:00:19.081394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.081418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.081671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.081699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.081844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.081871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.082033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.082060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.082195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.082221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.082452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.082481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.082659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.082682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.082802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.082844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.083014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.083040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.083181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.083210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 
00:46:59.629 [2024-07-22 17:00:19.083362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.083403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.083609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.083638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.083785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.083808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.083979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.084005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.084224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.084274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.084425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.084458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.084639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.084667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.084843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.084871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.084989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.085028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.629 [2024-07-22 17:00:19.085194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.085219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 
00:46:59.629 [2024-07-22 17:00:19.085353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.629 [2024-07-22 17:00:19.085381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.629 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.085613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.085637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.085783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.085811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.085995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.086037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.086177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.086203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.086325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.086366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.086507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.086534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.086722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.086745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.086946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.086981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.087145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.087170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 
00:46:59.630 [2024-07-22 17:00:19.087306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.087329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.087524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.087553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.087692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.087720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.087857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.087880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.088031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.088057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.088167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.088192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.088393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.088429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.088660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.088686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.088818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.088842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.088986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.089013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 
00:46:59.630 [2024-07-22 17:00:19.089175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.089201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.089340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.089363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.089516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.089540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.089711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.089735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.089838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.089861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.090082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.090108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.090224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.090250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.090396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.090434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.090547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.090570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.090727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.090755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 
00:46:59.630 [2024-07-22 17:00:19.090917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.090939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.091099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.091126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.091279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.091303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.091418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.091457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.091620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.091644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.091867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.091891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.092014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.092040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.092149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.092175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.630 qpair failed and we were unable to recover it. 00:46:59.630 [2024-07-22 17:00:19.092324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.630 [2024-07-22 17:00:19.092348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.092465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.092502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 
00:46:59.631 [2024-07-22 17:00:19.092637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.092660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.092836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.092860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.092975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.093001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.093145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.093171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.093279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.093303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.093447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.093485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.093681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.093704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.093850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.093874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.094041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.094066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.094205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.094229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 
00:46:59.631 [2024-07-22 17:00:19.094472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.094501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.094611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.094639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.094788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.094816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.094990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.095030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.095191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.095216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.095433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.095461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.095632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.095660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.095807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.095834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.095993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.096032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.096166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.096190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 
00:46:59.631 [2024-07-22 17:00:19.096423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.096451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.096619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.096647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.096844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.096873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.097032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.097057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.097195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.097220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.097402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.097429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.097595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.097623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.097795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.097823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.097992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.098032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.098163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.098191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 
00:46:59.631 [2024-07-22 17:00:19.098349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.098373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.098563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.098591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.098769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.098797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.098917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.098940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.099181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.099206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.099378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.631 [2024-07-22 17:00:19.099410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.631 qpair failed and we were unable to recover it. 00:46:59.631 [2024-07-22 17:00:19.099588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.099616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.099800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.099828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.100023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.100048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.100181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.100205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 
00:46:59.632 [2024-07-22 17:00:19.100444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.100472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.100637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.100665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.100793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.100834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.100992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.101034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.101144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.101169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.101308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.101331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.101486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.101514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.101738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.101765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.101957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.101999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.102147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.102172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 
00:46:59.632 [2024-07-22 17:00:19.102314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.102342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.102492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.102516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.102634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.102658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.102823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.102851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.102975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.103014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.103181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.103205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.103352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.103380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.103518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.103546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.103697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.103725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.103931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.103960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 
00:46:59.632 [2024-07-22 17:00:19.104102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.104126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.104253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.104294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.104400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.104428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.104648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.104676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.104812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.104840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.632 [2024-07-22 17:00:19.105031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.632 [2024-07-22 17:00:19.105056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.632 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.105169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.105192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.105332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.105373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.105611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.105639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.105855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.105892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 
00:46:59.633 [2024-07-22 17:00:19.106088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.106112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.106258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.106280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.106437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.106460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.106639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.106667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.106806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.106834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.107006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.107030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.107142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.107166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.107336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.107364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.107514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.107542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.107684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.107713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 
00:46:59.633 [2024-07-22 17:00:19.107830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.107858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.108001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.108025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.108163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.108186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.108358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.108386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.108514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.108557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.108690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.108717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.108829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.108857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.109073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.109119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.109268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.109293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.109480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.109523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 
00:46:59.633 [2024-07-22 17:00:19.109658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.109700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.109901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.109945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.110142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.110183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.110375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.110404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.110557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.110588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.110782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.110810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.110960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.111020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.111188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.111212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.111427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.111450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.111609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.111637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 
00:46:59.633 [2024-07-22 17:00:19.111864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.111892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.112034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.112072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.112198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.633 [2024-07-22 17:00:19.112222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.633 qpair failed and we were unable to recover it. 00:46:59.633 [2024-07-22 17:00:19.112376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.112404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.112530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.112558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.112736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.112764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.112933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.112961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.113119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.113143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.113338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.113366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.113517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.113545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 
00:46:59.634 [2024-07-22 17:00:19.113742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.113770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.113913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.113941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.114125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.114150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.114264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.114302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.114470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.114498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.114649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.114678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.114839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.114868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.115042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.115066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.115217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.115257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.115507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.115530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 
00:46:59.634 [2024-07-22 17:00:19.115708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.115755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.115950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.115983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.116110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.116134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.116296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.116337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.116514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.116542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.116709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.116738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.116924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.116952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.117100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.117124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.117235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.117259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.117502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.117533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 
00:46:59.634 [2024-07-22 17:00:19.117685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.117713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.117911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.117944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.118116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.118140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.118340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.118369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.118551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.118574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.118685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.118722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.118917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.118949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.119144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.119168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.119289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.119329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 00:46:59.634 [2024-07-22 17:00:19.119492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.119520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.634 qpair failed and we were unable to recover it. 
00:46:59.634 [2024-07-22 17:00:19.119656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.634 [2024-07-22 17:00:19.119693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.119834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.119871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.120024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.120053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.120220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.120258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.120366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.120407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.120625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.120663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.120817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.120840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.120956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.121005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.121138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.121166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.121342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.121380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 
00:46:59.635 [2024-07-22 17:00:19.121565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.121599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.121771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.121799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.121973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.121997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.122115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.122155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.122288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.122316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.122478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.122501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.122661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.122689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.122851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.122879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.123087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.123110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.123258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.123286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 
00:46:59.635 [2024-07-22 17:00:19.123459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.123487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.123597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.123621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.123757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.123790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.123958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.124009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.124144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.124168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.124289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.124333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.124474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.124502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.124688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.124711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.124896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.124925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.125065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.125089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 
00:46:59.635 [2024-07-22 17:00:19.125245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.125283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.125411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.125439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.125622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.125650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.125869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.125892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.126032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.126061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.126202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.126231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.126445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.126471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.126620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.635 [2024-07-22 17:00:19.126662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.635 qpair failed and we were unable to recover it. 00:46:59.635 [2024-07-22 17:00:19.126885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.126913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.127071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.127096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 
00:46:59.636 [2024-07-22 17:00:19.127232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.127272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.127438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.127466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.127699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.127722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.127901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.127928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.128106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.128130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.128272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.128295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.128481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.128517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.128683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.128711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.128907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.128930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.129095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.129136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 
00:46:59.636 [2024-07-22 17:00:19.129261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.129289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.129433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.129470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.129650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.129678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.129828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.129856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.130024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.130049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.130176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.130218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.130422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.130457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.130630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.130653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.130844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.130872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.131016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.131045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 
00:46:59.636 [2024-07-22 17:00:19.131272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.131296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.131455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.131484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.131661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.131689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.131889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.131912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.132114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.132143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.132333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.132361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.132545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.132568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.132801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.132828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.132987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.133016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.636 qpair failed and we were unable to recover it. 00:46:59.636 [2024-07-22 17:00:19.133148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.636 [2024-07-22 17:00:19.133172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 
00:46:59.637 [2024-07-22 17:00:19.133411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.133439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.133590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.133618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.133845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.133873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.134067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.134092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.134227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.134274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.134452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.134474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.134693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.134726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.134867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.134896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.135064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.135088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.135256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.135285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 
00:46:59.637 [2024-07-22 17:00:19.135484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.135512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.135637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.135675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.135855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.135883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.136020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.136048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.136195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.136219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.136465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.136493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.136640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.136668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.136801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.136839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.137009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.137034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.137190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.137218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 
00:46:59.637 [2024-07-22 17:00:19.137380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.137403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.137567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.137596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.137781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.137809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.137999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.138037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.138154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.138182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.138333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.138362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.138525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.138548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.138776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.138808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.138924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.138960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 00:46:59.637 [2024-07-22 17:00:19.139155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.139179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it. 
00:46:59.637 [2024-07-22 17:00:19.139386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.637 [2024-07-22 17:00:19.139414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:46:59.637 qpair failed and we were unable to recover it.
00:46:59.637-00:46:59.643 [2024-07-22 17:00:19.139551 through 17:00:19.181438] posix.c:1037:posix_sock_create / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats without interruption, mostly against tqpair=0x7f8788000b90 and, from 17:00:19.172288 onward, interleaved with tqpair=0x140c570, all with addr=10.0.0.2, port=4420.
00:46:59.643 [2024-07-22 17:00:19.181666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.181693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.181816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.181848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.182001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.182025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.182162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.182185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.182363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.182391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.182529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.182554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.182734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.182766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.182916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.182944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.183108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.183137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.183270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.183293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 
00:46:59.643 [2024-07-22 17:00:19.183468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.183496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.183658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.183682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.183944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.183996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.184177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.184201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.184321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.184344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.184488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.184527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.643 [2024-07-22 17:00:19.184705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.643 [2024-07-22 17:00:19.184734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.643 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.184885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.184909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.185036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.185060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.185248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.185273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 
00:46:59.644 [2024-07-22 17:00:19.185418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.185442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.185568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.185594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.185752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.185781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.185926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.185973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.186105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.186129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.186352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.186380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.186519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.186559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.186682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.186706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.186865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.186892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.187037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.187062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 
00:46:59.644 [2024-07-22 17:00:19.187175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.187199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.187343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.187370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.187543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.187567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.187696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.187739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.187878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.187912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.188061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.188086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.188222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.188260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.188434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.188462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.188612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.188649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.188821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.188849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 
00:46:59.644 [2024-07-22 17:00:19.188984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.189015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.189222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.189261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.189423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.189451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.189631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.189659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.189827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.189850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.190007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.190031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.190191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.190218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.190330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.190354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.190595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.190623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.190768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.190795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 
00:46:59.644 [2024-07-22 17:00:19.191034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.191062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.191209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.191237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.191372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.191403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.191510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.191533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.644 qpair failed and we were unable to recover it. 00:46:59.644 [2024-07-22 17:00:19.191675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.644 [2024-07-22 17:00:19.191712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.191830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.191858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.191987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.192012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.192186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.192226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.192373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.192400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.192573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.192596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 
00:46:59.645 [2024-07-22 17:00:19.192771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.192800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.192925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.192957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.193157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.193183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.193297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.193338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.193505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.193532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.193697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.193720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.193840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.193877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.194016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.194045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.194212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.194236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.194363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.194404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 
00:46:59.645 [2024-07-22 17:00:19.194572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.194600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.194755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.194778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.194952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.194987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.195166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.195194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.195371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.195396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.195539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.195561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.195726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.195757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.195885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.195924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.196195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.196223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.196390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.196418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 
00:46:59.645 [2024-07-22 17:00:19.196543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.196567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.196707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.196731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.196872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.196899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.197029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.197053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.197204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.197229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.197360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.197386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.197507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.197531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.197678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.197702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.197829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.197860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.198021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.198047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 
00:46:59.645 [2024-07-22 17:00:19.198163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.198201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.198363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.198390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.645 qpair failed and we were unable to recover it. 00:46:59.645 [2024-07-22 17:00:19.198515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.645 [2024-07-22 17:00:19.198553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.198698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.198738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.198909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.198936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.199139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.199163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.199324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.199352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.199496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.199525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.199653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.199677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.199826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.199863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 
00:46:59.646 [2024-07-22 17:00:19.200009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.200036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.200174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.200197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.200390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.200417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.200580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.200609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.200729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.200769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.200914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.200937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.201121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.201152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.201305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.201328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.201488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.201526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.201673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.201700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 
00:46:59.646 [2024-07-22 17:00:19.201877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.201901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.202038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.202083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.202222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.202253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.202431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.202454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.202610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.202639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.202806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.202836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.202975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.202999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.203141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.203180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.203318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.203345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.203501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.203526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 
00:46:59.646 [2024-07-22 17:00:19.203658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.203697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.203876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.203904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.204053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.204078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.204219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.204243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.646 [2024-07-22 17:00:19.204428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.646 [2024-07-22 17:00:19.204457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.646 qpair failed and we were unable to recover it. 00:46:59.647 [2024-07-22 17:00:19.204617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.647 [2024-07-22 17:00:19.204641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.647 qpair failed and we were unable to recover it. 00:46:59.647 [2024-07-22 17:00:19.204778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.647 [2024-07-22 17:00:19.204818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.647 qpair failed and we were unable to recover it. 00:46:59.647 [2024-07-22 17:00:19.204980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.647 [2024-07-22 17:00:19.205020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.647 qpair failed and we were unable to recover it. 00:46:59.647 [2024-07-22 17:00:19.205194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.647 [2024-07-22 17:00:19.205219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.647 qpair failed and we were unable to recover it. 00:46:59.647 [2024-07-22 17:00:19.205338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.647 [2024-07-22 17:00:19.205386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.647 qpair failed and we were unable to recover it. 
00:46:59.647 [2024-07-22 17:00:19.205525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.647 [2024-07-22 17:00:19.205552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.647 qpair failed and we were unable to recover it.
00:46:59.652 (the posix_sock_create / nvme_tcp_qpair_connect_sock message pair above repeats verbatim for every subsequent reconnect attempt, [2024-07-22 17:00:19.205712] through [2024-07-22 17:00:19.243983]; each attempt fails with errno = 111 against tqpair=0x140c570, addr=10.0.0.2, port=4420, and each ends with "qpair failed and we were unable to recover it.")
00:46:59.652 [2024-07-22 17:00:19.244104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.652 [2024-07-22 17:00:19.244131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.652 qpair failed and we were unable to recover it. 00:46:59.652 [2024-07-22 17:00:19.244310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.652 [2024-07-22 17:00:19.244333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.652 qpair failed and we were unable to recover it. 00:46:59.652 [2024-07-22 17:00:19.244531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.652 [2024-07-22 17:00:19.244559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.652 qpair failed and we were unable to recover it. 00:46:59.652 [2024-07-22 17:00:19.244717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.652 [2024-07-22 17:00:19.244744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.652 qpair failed and we were unable to recover it. 00:46:59.652 [2024-07-22 17:00:19.244928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.652 [2024-07-22 17:00:19.244951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.652 qpair failed and we were unable to recover it. 00:46:59.652 [2024-07-22 17:00:19.245125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.652 [2024-07-22 17:00:19.245152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.652 qpair failed and we were unable to recover it. 00:46:59.652 [2024-07-22 17:00:19.245280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.652 [2024-07-22 17:00:19.245311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.652 qpair failed and we were unable to recover it. 00:46:59.652 [2024-07-22 17:00:19.245473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.652 [2024-07-22 17:00:19.245498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.652 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.245643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.245665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.245827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.245852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 
00:46:59.932 [2024-07-22 17:00:19.245998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.246024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.246197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.246224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.246397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.246424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.246591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.246626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.246812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.246848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.247009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.247038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.247215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.247240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.247424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.247451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.247651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.247679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.247866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.247891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 
00:46:59.932 [2024-07-22 17:00:19.248105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.248134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.248344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.248373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.248550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.248575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.248804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.248832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.248999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.249027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.249236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.249276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.249419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.249443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.249631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.249662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.249823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.249853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.250064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.250089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 
00:46:59.932 [2024-07-22 17:00:19.250234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.250273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.250439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.250462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.250565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.250587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.250737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.250768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.250953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.251009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.251207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.251236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.251401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.251429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.251605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.251628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.251842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.251870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 00:46:59.932 [2024-07-22 17:00:19.252022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.252050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.932 qpair failed and we were unable to recover it. 
00:46:59.932 [2024-07-22 17:00:19.252173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.932 [2024-07-22 17:00:19.252196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.252405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.252440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.252586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.252613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.252806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.252830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.252955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.252988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.253146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.253173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.253344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.253367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.253558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.253586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.253714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.253742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.253933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.253956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 
00:46:59.933 [2024-07-22 17:00:19.254100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.254123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.254275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.254302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.254465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.254488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.254629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.254666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.257097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.257128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.257303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.257326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.257544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.257572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.257721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.257749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.257984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.258033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.258174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.258197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 
00:46:59.933 [2024-07-22 17:00:19.258341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.258385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.258576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.258614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.258792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.258819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.259026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.259050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.259250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.259277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.259426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.259453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.259608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.259636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.259810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.259844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.259999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.260027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.260191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.260218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 
00:46:59.933 [2024-07-22 17:00:19.260488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.260511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.260693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.260720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.260938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.260986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.261171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.261195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.261315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.261352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.261515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.261542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.261705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.261753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.261982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.933 [2024-07-22 17:00:19.262010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.933 qpair failed and we were unable to recover it. 00:46:59.933 [2024-07-22 17:00:19.262161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.262189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.262423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.262446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 
00:46:59.934 [2024-07-22 17:00:19.262630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.262678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.262944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.262977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.263119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.263143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.263334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.263356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.263478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.263505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.263716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.263739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.263901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.263929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.264087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.264110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.264313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.264335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.264527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.264554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 
00:46:59.934 [2024-07-22 17:00:19.264697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.264725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.265005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.265030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.265250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.265277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.265451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.265479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.265642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.265669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.265918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.265946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.266117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.266144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.266342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.266365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.266521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.266549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.266761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.266789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 
00:46:59.934 [2024-07-22 17:00:19.266948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.266982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.267197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.267225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.267449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.267476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.267607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.267629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.267770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.267793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.268065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.268094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.268238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.268274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.268412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.268452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.268660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.268688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.268863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.268886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 
00:46:59.934 [2024-07-22 17:00:19.269049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.269091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.269313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.269341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.269496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.269518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.269663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.269704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.269841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.269869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.270091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.270114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.934 qpair failed and we were unable to recover it. 00:46:59.934 [2024-07-22 17:00:19.270265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.934 [2024-07-22 17:00:19.270292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.270478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.270505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.270692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.270715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.270943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.270977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 
00:46:59.935 [2024-07-22 17:00:19.271136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.271160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.271346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.271369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.271596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.271624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.271771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.271801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.272043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.272067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.272230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.272258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.272476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.272504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.272686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.272708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.272901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.272933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 00:46:59.935 [2024-07-22 17:00:19.273183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.273207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it. 
00:46:59.935 [2024-07-22 17:00:19.273354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.935 [2024-07-22 17:00:19.273376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.935 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats ~200 more times across timestamps 17:00:19.273-17:00:19.315 (log prefix 00:46:59.935-00:46:59.941): every reconnect attempt by posix_sock_create / nvme_tcp_qpair_connect_sock to addr=10.0.0.2, port=4420 fails with errno = 111, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:46:59.941 [2024-07-22 17:00:19.315120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.315143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.315391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.315419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.315531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.315559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.315736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.315778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.315913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.315941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.316191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.316218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.316391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.316414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.316556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.316583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.316758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.316786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.317029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.317054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 
00:46:59.941 [2024-07-22 17:00:19.317248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.317276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.317439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.317467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.317622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.317644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.317831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.317858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.318022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.318051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.318230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.318268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.318430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.318457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.318606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.318634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.318780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.318816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.318992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.319047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 
00:46:59.941 [2024-07-22 17:00:19.319203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.319230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.319357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.319395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.319666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.319694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.319881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.319909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.320086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.320109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.320255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.320292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.320540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.320568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.320719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.320742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.320985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.321014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 00:46:59.941 [2024-07-22 17:00:19.321118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.941 [2024-07-22 17:00:19.321146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.941 qpair failed and we were unable to recover it. 
00:46:59.942 [2024-07-22 17:00:19.321302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.321339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.321504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.321531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.321698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.321730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.321925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.321968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.322165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.322189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.322336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.322363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.322526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.322548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.322720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.322751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.322891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.322919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.323106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.323136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 
00:46:59.942 [2024-07-22 17:00:19.323268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.323290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.323434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.323462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.323655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.323682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.323850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.323877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.324036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.324060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.324269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.324291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.324468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.324497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.324717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.324745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.324916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.324941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.325109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.325136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 
00:46:59.942 [2024-07-22 17:00:19.325273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.325302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.325440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.325477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.325652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.325679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.325854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.325882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.326057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.326082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.326231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.326268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.326403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.326431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.326583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.326620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.326748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.326787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.326903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.326931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 
00:46:59.942 [2024-07-22 17:00:19.327084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.327107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.327293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.327333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.327506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.327534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.327751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.327774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.327926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.327956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.328112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.328140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.328323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.328349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.328507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.942 [2024-07-22 17:00:19.328535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.942 qpair failed and we were unable to recover it. 00:46:59.942 [2024-07-22 17:00:19.328718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.328756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.328918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.328945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 
00:46:59.943 [2024-07-22 17:00:19.329136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.329159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.329339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.329368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.329506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.329542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.329689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.329726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.329885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.329911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.330085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.330110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.330268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.330296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.330418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.330445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.330668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.330691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.330863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.330890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 
00:46:59.943 [2024-07-22 17:00:19.331065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.331089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.331232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.331269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.331442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.331469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.331634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.331661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.331879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.331902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.332077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.332100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.332270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.332297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.332473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.332495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.332664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.332697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.332866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.332894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 
00:46:59.943 [2024-07-22 17:00:19.333091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.333126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.333270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.333296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.333476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.333504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.333669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.333691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.333816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.333855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.334063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.334092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.334272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.334294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.334493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.334521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.334698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.334726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.334886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.334908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 
00:46:59.943 [2024-07-22 17:00:19.335125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.335157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.335332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.335359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.335579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.335602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.335713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.335741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.335905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.335932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.336116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.943 [2024-07-22 17:00:19.336139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.943 qpair failed and we were unable to recover it. 00:46:59.943 [2024-07-22 17:00:19.336296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.336337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.336484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.336514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.336697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.336719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.336893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.336920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 
00:46:59.944 [2024-07-22 17:00:19.337146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.337169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.337308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.337331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.337447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.337485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.337647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.337674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.337790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.337814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.337975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.337998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.338161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.338189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.338369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.338399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.338504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.338540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.338719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.338755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 
00:46:59.944 [2024-07-22 17:00:19.338929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.338952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.339134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.339162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.339341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.339368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.339519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.339541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.339713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.339741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.339920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.339947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.340131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.340154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.340294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.340349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.340519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.340547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 00:46:59.944 [2024-07-22 17:00:19.340741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.340763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 
00:46:59.944 [2024-07-22 17:00:19.340993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.944 [2024-07-22 17:00:19.341022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.944 qpair failed and we were unable to recover it. 
00:46:59.950 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeated for every retry from 17:00:19.341187 through 17:00:19.383887, roughly 200 further attempts against tqpair=0x140c570 (addr=10.0.0.2, port=4420), all refused ...]
00:46:59.950 [2024-07-22 17:00:19.384070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.384098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.384281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.384309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.384471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.384494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.384662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.384690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.384843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.384875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.385039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.385070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.385177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.385216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.385342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.385370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.385538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.385576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.385718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.385745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 
00:46:59.950 [2024-07-22 17:00:19.385917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.385946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.386142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.386166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.386361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.386388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.386600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.386628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.386795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.386817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.387052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.387080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.387259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.387293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.387454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.387477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.387645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.387674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.387919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.387946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 
00:46:59.950 [2024-07-22 17:00:19.388094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.950 [2024-07-22 17:00:19.388117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.950 qpair failed and we were unable to recover it. 00:46:59.950 [2024-07-22 17:00:19.388319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.388347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.388495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.388527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.388716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.388739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.388917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.388945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.389128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.389152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.389314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.389337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.389447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.389489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.389643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.389670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.389808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.389845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 
00:46:59.951 [2024-07-22 17:00:19.390016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.390044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.390178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.390209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.390397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.390419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.390606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.390633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.390766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.390794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.390958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.391012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.391171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.391198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.391354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.391382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.391604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.391632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.391792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.391822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 
00:46:59.951 [2024-07-22 17:00:19.391995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.392024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.392197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.392219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.392387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.392419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.392563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.392591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.392830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.392853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.393000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.393028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.393190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.393218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.393385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.393408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.393540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.393582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.393718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.393745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 
00:46:59.951 [2024-07-22 17:00:19.393923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.393961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.394151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.394175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.394339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.394367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.394538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.394560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.394726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.394747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.394941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.394980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.395129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.395153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.395267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.395289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.395450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.951 [2024-07-22 17:00:19.395477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.951 qpair failed and we were unable to recover it. 00:46:59.951 [2024-07-22 17:00:19.395720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.395743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 
00:46:59.952 [2024-07-22 17:00:19.395913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.395941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.396096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.396119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.396354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.396377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.396523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.396550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.396727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.396755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.396945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.396999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.397144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.397181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.397344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.397371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.397536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.397562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.397721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.397743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 
00:46:59.952 [2024-07-22 17:00:19.397955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.397991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.398141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.398164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.398313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.398369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.398534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.398563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.398723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.398746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.398918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.398950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.399101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.399131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.399303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.399326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.399493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.399521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.399714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.399742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 
00:46:59.952 [2024-07-22 17:00:19.399915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.399937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.400126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.400168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.400313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.400341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.400476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.400513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.400639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.400661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.400850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.400888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.401072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.401097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.401307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.401335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.401492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.401520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.401714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.401745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 
00:46:59.952 [2024-07-22 17:00:19.401879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.401906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.402069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.402093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.402213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.402236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.402416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.402444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.402562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.402590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.402734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.402758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.403028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.952 [2024-07-22 17:00:19.403056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.952 qpair failed and we were unable to recover it. 00:46:59.952 [2024-07-22 17:00:19.403206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.403238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.403412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.403434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.403552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.403593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 
00:46:59.953 [2024-07-22 17:00:19.403770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.403798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.403969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.403992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.404152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.404180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.404364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.404392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.404553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.404576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.404747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.404769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.404920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.404948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.405148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.405172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.405349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.405377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.405589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.405627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 
00:46:59.953 [2024-07-22 17:00:19.405749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.405772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.405901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.405939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.406118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.406145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.406266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.406290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.406467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.406495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.406637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.406665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.406825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.406863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.407002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.407026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.407213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.407241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.407377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.407414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 
00:46:59.953 [2024-07-22 17:00:19.407605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.407627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.407803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.407830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.407990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.408014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.408218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.408246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.408359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.408388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.408576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.408599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.408776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.408808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.409036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.409071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.409218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.409242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.409473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.409501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 
00:46:59.953 [2024-07-22 17:00:19.409651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.409679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.409864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.409887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.410055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.410083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.410217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.410244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.410433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.410456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.953 qpair failed and we were unable to recover it. 00:46:59.953 [2024-07-22 17:00:19.410609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.953 [2024-07-22 17:00:19.410637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.954 qpair failed and we were unable to recover it. 00:46:59.954 [2024-07-22 17:00:19.410832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.954 [2024-07-22 17:00:19.410865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.954 qpair failed and we were unable to recover it. 00:46:59.954 [2024-07-22 17:00:19.411062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.954 [2024-07-22 17:00:19.411087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.954 qpair failed and we were unable to recover it. 00:46:59.954 [2024-07-22 17:00:19.411298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.954 [2024-07-22 17:00:19.411327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.954 qpair failed and we were unable to recover it. 00:46:59.954 [2024-07-22 17:00:19.411476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.954 [2024-07-22 17:00:19.411504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.954 qpair failed and we were unable to recover it. 
00:46:59.959 [2024-07-22 17:00:19.446058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.446083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.446214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.446243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.446421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.446444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.446610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.446639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.446784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.446812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.446939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.446984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.447159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.447188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.447311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.447340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.447519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.447542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.447692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.447736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 
00:46:59.959 [2024-07-22 17:00:19.447881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.447910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.448066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.448091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.448239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.448278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.448422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.448454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.448608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.448632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.448748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.448771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.448929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.448957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.449077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.449101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.449253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.449290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.449418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.449447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 
00:46:59.959 [2024-07-22 17:00:19.449609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.449648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.449786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.449814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.449951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.449988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.450152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.959 [2024-07-22 17:00:19.450177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.959 qpair failed and we were unable to recover it. 00:46:59.959 [2024-07-22 17:00:19.450305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.450330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.450507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.450538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.450690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.450716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.450858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.450899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.451052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.451083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.451207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.451231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 
00:46:59.960 [2024-07-22 17:00:19.451387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.451426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.451597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.451625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.451788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.451812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.451950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.451996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.452148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.452176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.452333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.452372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.452496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.452538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.452688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.452717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.452861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.452884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.452997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.453022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 
00:46:59.960 [2024-07-22 17:00:19.453166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.453195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.453358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.453381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.453535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.453577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.453714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.453742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.453881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.453922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.454096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.454139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.454279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.454307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.454455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.454478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.454625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.454664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.454817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.454845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 
00:46:59.960 [2024-07-22 17:00:19.454991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.455015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.455174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.455201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.455369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.455398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.455537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.455561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.455684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.455708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.455866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.455895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.456067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.456093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.456201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.456224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.456408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.456437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.456596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.456619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 
00:46:59.960 [2024-07-22 17:00:19.456727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.960 [2024-07-22 17:00:19.456751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.960 qpair failed and we were unable to recover it. 00:46:59.960 [2024-07-22 17:00:19.456914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.456942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.457099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.457124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.457247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.457271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.457394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.457425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.457556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.457594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.457729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.457767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.457900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.457928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.458109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.458135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.458285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.458310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 
00:46:59.961 [2024-07-22 17:00:19.458492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.458520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.458644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.458667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.458792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.458817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.458983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.459024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.459179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.459202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.459362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.459407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.459544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.459572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.459737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.459760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.459907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.459931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.460120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.460145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 
00:46:59.961 [2024-07-22 17:00:19.460277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.460301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.460442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.460490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.460596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.460624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.460757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.460780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.460898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.460921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.461080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.461106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.461227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.461266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.461439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.461463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.461644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.461672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.461814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.461837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 
00:46:59.961 [2024-07-22 17:00:19.462008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.462050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.462221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.462250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.462374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.462398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.462590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.462621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.462734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.462763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.462944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.462997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.463176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.463207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.463328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.463357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.463489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.961 [2024-07-22 17:00:19.463528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.961 qpair failed and we were unable to recover it. 00:46:59.961 [2024-07-22 17:00:19.463702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.463731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 
00:46:59.962 [2024-07-22 17:00:19.463845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.463874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.464038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.464063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.464229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.464258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.464377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.464406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.464529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.464552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.464732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.464757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.464918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.464947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.465078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.465102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.465230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.465273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.465449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.465479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 
00:46:59.962 [2024-07-22 17:00:19.465618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.465656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.465807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.465831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.465957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.466009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.466166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.466191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.466327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.466356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.466511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.466539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.466706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.466731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.466886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.466929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.467059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.467098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.467257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.467296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 
00:46:59.962 [2024-07-22 17:00:19.467467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.467496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.467619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.467647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.467771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.467811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.467943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.467973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.468132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.468161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.468321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.468344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.468458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.468483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.468658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.468688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.468812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.468851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.468984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.469010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 
00:46:59.962 [2024-07-22 17:00:19.469159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.469188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.469359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.469383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.469539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.469568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.469710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.469739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.469864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.469907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.470046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.470087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.470211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.962 [2024-07-22 17:00:19.470240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.962 qpair failed and we were unable to recover it. 00:46:59.962 [2024-07-22 17:00:19.470380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.963 [2024-07-22 17:00:19.470419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.963 qpair failed and we were unable to recover it. 00:46:59.963 [2024-07-22 17:00:19.470568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.963 [2024-07-22 17:00:19.470609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.963 qpair failed and we were unable to recover it. 00:46:59.963 [2024-07-22 17:00:19.470734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.963 [2024-07-22 17:00:19.470762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.963 qpair failed and we were unable to recover it. 
00:46:59.963 [2024-07-22 17:00:19.470921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.963 [2024-07-22 17:00:19.470946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.963 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair, each followed by "qpair failed and we were unable to recover it.", repeats verbatim for every successive reconnect attempt (roughly 200 repetitions between 17:00:19.471116 and 17:00:19.513364), all against tqpair=0x140c570 at 10.0.0.2 port 4420 ...]
00:46:59.968 [2024-07-22 17:00:19.513551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.968 [2024-07-22 17:00:19.513581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.968 qpair failed and we were unable to recover it.
00:46:59.968 [2024-07-22 17:00:19.513749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.968 [2024-07-22 17:00:19.513773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.968 qpair failed and we were unable to recover it. 00:46:59.968 [2024-07-22 17:00:19.513951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.968 [2024-07-22 17:00:19.513988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.968 qpair failed and we were unable to recover it. 00:46:59.968 [2024-07-22 17:00:19.514175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.968 [2024-07-22 17:00:19.514205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.968 qpair failed and we were unable to recover it. 00:46:59.968 [2024-07-22 17:00:19.514380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.968 [2024-07-22 17:00:19.514404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.968 qpair failed and we were unable to recover it. 00:46:59.968 [2024-07-22 17:00:19.514572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.968 [2024-07-22 17:00:19.514601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.968 qpair failed and we were unable to recover it. 00:46:59.968 [2024-07-22 17:00:19.514825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.968 [2024-07-22 17:00:19.514864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.968 qpair failed and we were unable to recover it. 00:46:59.968 [2024-07-22 17:00:19.515013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.968 [2024-07-22 17:00:19.515050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.968 qpair failed and we were unable to recover it. 00:46:59.968 [2024-07-22 17:00:19.515238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.968 [2024-07-22 17:00:19.515267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.968 qpair failed and we were unable to recover it. 00:46:59.968 [2024-07-22 17:00:19.515442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.968 [2024-07-22 17:00:19.515471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.968 qpair failed and we were unable to recover it. 00:46:59.968 [2024-07-22 17:00:19.515654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.968 [2024-07-22 17:00:19.515685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.968 qpair failed and we were unable to recover it. 
00:46:59.968 [2024-07-22 17:00:19.515845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.515874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.516111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.516141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.516323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.516346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.516534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.516572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.516793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.516822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.517011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.517036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.517202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.517244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.517449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.517477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.517700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.517723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.517939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.517981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 
00:46:59.969 [2024-07-22 17:00:19.518216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.518240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.518481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.518504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.518654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.518683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.518931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.518961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.519203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.519229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.519465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.519494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.519683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.519712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.520000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.520024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.520277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.520306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.520507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.520536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 
00:46:59.969 [2024-07-22 17:00:19.520723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.520746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.520962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.520999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.521208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.521238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.521413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.521436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.521650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.521679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.521917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.521945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.522225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.522250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.522439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.522469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.522670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.522699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.522894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.522916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 
00:46:59.969 [2024-07-22 17:00:19.523136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.523165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.523318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.969 [2024-07-22 17:00:19.523347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.969 qpair failed and we were unable to recover it. 00:46:59.969 [2024-07-22 17:00:19.523494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.523517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.523722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.523756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.523992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.524022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.524198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.524222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.524429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.524457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.524683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.524711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.524907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.524935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.525092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.525118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 
00:46:59.970 [2024-07-22 17:00:19.525339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.525368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.525593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.525617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.525795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.525825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.526016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.526041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.526238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.526261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.526466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.526495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.526649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.526679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.526898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.526922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.527177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.527207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.527435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.527465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 
00:46:59.970 [2024-07-22 17:00:19.527669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.527692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.527935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.527973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.528178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.528207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.528363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.528395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.528632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.528662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.528876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.528904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.529055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.529090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.529308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.529337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.529550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.529579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.529743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.529766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 
00:46:59.970 [2024-07-22 17:00:19.529988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.530022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.530267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.530297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.530486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.530510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.530678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.530707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.530933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.530961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.531176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.531200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.531386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.531418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.531584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.531613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.531804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.531831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 00:46:59.970 [2024-07-22 17:00:19.532032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.532061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.970 qpair failed and we were unable to recover it. 
00:46:59.970 [2024-07-22 17:00:19.532292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.970 [2024-07-22 17:00:19.532321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.532517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.532542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.532697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.532725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.532939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.532975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.533116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.533148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.533371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.533400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.533611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.533640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.533861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.533884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.534106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.534136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.534360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.534389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 
00:46:59.971 [2024-07-22 17:00:19.534548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.534581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.534819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.534848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.535049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.535072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.535260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.535283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.535535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.535564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.535787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.535829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.536005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.536028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.536239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.536273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.536466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.536495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.536694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.536717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 
00:46:59.971 [2024-07-22 17:00:19.536894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.536930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.537174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.537199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.537344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.537367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.537581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.537610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.537880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.537910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.538088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.538113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.538280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.538309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.538543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.538572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.538802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.538830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.538984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.539013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 
00:46:59.971 [2024-07-22 17:00:19.539133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.539161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.539324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.539362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.539589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.539617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.539774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.539801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.540061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.540086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.540337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.540366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.540563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.540592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.540754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.540776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.971 [2024-07-22 17:00:19.540983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.971 [2024-07-22 17:00:19.541029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.971 qpair failed and we were unable to recover it. 00:46:59.972 [2024-07-22 17:00:19.541215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.972 [2024-07-22 17:00:19.541242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.972 qpair failed and we were unable to recover it. 
00:46:59.972 [2024-07-22 17:00:19.541383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.972 [2024-07-22 17:00:19.541420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.972 qpair failed and we were unable to recover it. 00:46:59.972 [2024-07-22 17:00:19.541567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.972 [2024-07-22 17:00:19.541610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.972 qpair failed and we were unable to recover it. 00:46:59.972 [2024-07-22 17:00:19.541849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.972 [2024-07-22 17:00:19.541878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.972 qpair failed and we were unable to recover it. 00:46:59.972 [2024-07-22 17:00:19.542099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.972 [2024-07-22 17:00:19.542124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.972 qpair failed and we were unable to recover it. 00:46:59.972 [2024-07-22 17:00:19.542378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.972 [2024-07-22 17:00:19.542407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.972 qpair failed and we were unable to recover it. 00:46:59.972 [2024-07-22 17:00:19.542619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.972 [2024-07-22 17:00:19.542648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.972 qpair failed and we were unable to recover it. 00:46:59.972 [2024-07-22 17:00:19.542866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.972 [2024-07-22 17:00:19.542895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.972 qpair failed and we were unable to recover it. 00:46:59.972 [2024-07-22 17:00:19.543119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.972 [2024-07-22 17:00:19.543144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.972 qpair failed and we were unable to recover it. 00:46:59.972 [2024-07-22 17:00:19.543344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.972 [2024-07-22 17:00:19.543373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.972 qpair failed and we were unable to recover it. 00:46:59.972 [2024-07-22 17:00:19.543563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:46:59.972 [2024-07-22 17:00:19.543586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:46:59.972 qpair failed and we were unable to recover it. 
00:46:59.972 [2024-07-22 17:00:19.543742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:46:59.972 [2024-07-22 17:00:19.543770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 
00:46:59.972 qpair failed and we were unable to recover it. 
00:46:59.972 [last message sequence repeated continuously from 2024-07-22 17:00:19.543982 through 17:00:19.592515; every attempt failed the same way, with errno = 111 against tqpair=0x140c570, addr=10.0.0.2, port=4420, and each qpair was unrecoverable] 
00:47:00.243 [2024-07-22 17:00:19.592949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:47:00.243 [2024-07-22 17:00:19.592981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 
00:47:00.243 qpair failed and we were unable to recover it. 
00:47:00.243 [2024-07-22 17:00:19.593180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.593205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.593372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.593402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.593653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.593683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.593888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.593920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.594117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.594162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.594325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.594354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.594518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.594542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.594709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.594733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.594921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.594951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.595148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.595173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 
00:47:00.243 [2024-07-22 17:00:19.595350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.595378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.595572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.595604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.595776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.595800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.595978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.596008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.596204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.596234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.596381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.596405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.596584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.596614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.596770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.243 [2024-07-22 17:00:19.596799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.243 qpair failed and we were unable to recover it. 00:47:00.243 [2024-07-22 17:00:19.596961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.596993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 00:47:00.244 [2024-07-22 17:00:19.597138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.597164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 
00:47:00.244 [2024-07-22 17:00:19.597353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.597382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 00:47:00.244 [2024-07-22 17:00:19.598193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.598230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 00:47:00.244 [2024-07-22 17:00:19.598454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.598485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 00:47:00.244 [2024-07-22 17:00:19.598639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.598669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 00:47:00.244 [2024-07-22 17:00:19.598886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.598917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 00:47:00.244 [2024-07-22 17:00:19.599087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.599114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 00:47:00.244 [2024-07-22 17:00:19.599595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.599621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 00:47:00.244 [2024-07-22 17:00:19.599814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.599842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 00:47:00.244 [2024-07-22 17:00:19.599991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.600019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 00:47:00.244 [2024-07-22 17:00:19.600187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.600213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 
00:47:00.244 [2024-07-22 17:00:19.600379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.244 [2024-07-22 17:00:19.600404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.244 qpair failed and we were unable to recover it. 00:47:00.244 [2024-07-22 17:00:19.600598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.600632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.600848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.600873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.601041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.601077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.601215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.601240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.601459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.601483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.601710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.601740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.601972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.602016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.602152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.602178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.602344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.602373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 
00:47:00.245 [2024-07-22 17:00:19.602571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.602600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.602790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.602819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.602977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.603025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.603155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.603181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.603365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.603404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.603593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.603637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.603809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.603865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.604054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.604081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.604205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.604232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.604378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.604423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 
00:47:00.245 [2024-07-22 17:00:19.604615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.604668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.604915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.604974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.605147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.605173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.605351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.605391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.605612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.605659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.605814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.605843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.606027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.606053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.606233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.606272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.606504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.606534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.606757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.606787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 
00:47:00.245 [2024-07-22 17:00:19.606972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.606996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.607135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.607162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.607334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.607365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.607569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.607631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.607861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.607890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.608073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.608100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.608261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.608285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.608444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.608491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.608701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.608729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.608889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.608921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 
00:47:00.245 [2024-07-22 17:00:19.609135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.609162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.609356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.609385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.245 qpair failed and we were unable to recover it. 00:47:00.245 [2024-07-22 17:00:19.609649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.245 [2024-07-22 17:00:19.609700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.609928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.609957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.610156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.610182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.610397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.610435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.610655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.610699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.610907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.610951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.611130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.611156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.611291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.611333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 
00:47:00.246 [2024-07-22 17:00:19.611523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.611565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.611773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.611832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.612016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.612042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.612190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.612234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.612440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.612503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.612752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.612802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.613010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.613048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.613267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.613296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.613458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.613487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.613657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.613686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 
00:47:00.246 [2024-07-22 17:00:19.613885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.613910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.614079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.614109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.614299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.614340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.614562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.614613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.614846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.614886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.615044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.615071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.615227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.615273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.615403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.615434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.615613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.615645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.615842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.615891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 
00:47:00.246 [2024-07-22 17:00:19.616031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.616060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.616178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.616204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.616390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.616418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.616620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.616650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.616851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.616880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.617074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.617115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.617276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.617307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.617503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.617546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.617745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.617792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.617976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.618003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 
00:47:00.246 [2024-07-22 17:00:19.618131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.618158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.618296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.618339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.618516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.618558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.618817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.618860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.619076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.619103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.619231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.619278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.619493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.619548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.619692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.619751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.619932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.619979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.620174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.620218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 
00:47:00.246 [2024-07-22 17:00:19.620401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.620443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.620590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.620641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.620811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.620836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.620984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.621012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.621168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.621212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.621406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.621449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.621618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.621667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.621831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.621856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.622019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.622048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 00:47:00.246 [2024-07-22 17:00:19.622179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.246 [2024-07-22 17:00:19.622222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.246 qpair failed and we were unable to recover it. 
00:47:00.246 [2024-07-22 17:00:19.622401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:47:00.246 [2024-07-22 17:00:19.622444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 
00:47:00.246 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for over 200 further connection attempts, timestamps 17:00:19.622578 through 17:00:19.674548 ...]
00:47:00.250 [2024-07-22 17:00:19.674777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:47:00.250 [2024-07-22 17:00:19.674820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 
00:47:00.250 qpair failed and we were unable to recover it. 
00:47:00.250 [2024-07-22 17:00:19.675065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.675096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.675378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.675420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.675629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.675672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.675853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.675878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.676136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.676161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.676421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.676464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.676677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.676720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.676927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.676970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.677197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.677222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.677425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.677469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 
00:47:00.250 [2024-07-22 17:00:19.677708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.677751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.677989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.678014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.678255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.678280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.678479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.678521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.678653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.678703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.678871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.678896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.679089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.679114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.679316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.679358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.679529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.679573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.679776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.679817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 
00:47:00.250 [2024-07-22 17:00:19.679986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.680028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.680230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.680274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.680428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.680467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.680636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.680679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.680895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.680918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.681126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.681169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.681340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.681383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.681606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.681650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.681840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.681865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.682109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.682152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 
00:47:00.250 [2024-07-22 17:00:19.682355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.682397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.682617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.682660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.682899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.682925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.683145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.250 [2024-07-22 17:00:19.683190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.250 qpair failed and we were unable to recover it. 00:47:00.250 [2024-07-22 17:00:19.683403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.683445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.683679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.683722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.683892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.683932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.684201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.684249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.684453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.684496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.684708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.684749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 
00:47:00.251 [2024-07-22 17:00:19.684976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.685002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.685220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.685264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.685498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.685541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.685707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.685762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.685993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.686019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.686237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.686262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.686465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.686509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.686693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.686736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.686886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.686910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.687081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.687106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 
00:47:00.251 [2024-07-22 17:00:19.687272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.687323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.687568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.687611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.687816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.687855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.688076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.688120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.688326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.688356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.688546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.688576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.688827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.688852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.689054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.689095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.689304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.689348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.689557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.689600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 
00:47:00.251 [2024-07-22 17:00:19.689839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.689864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.690125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.690171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.690400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.690443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.690670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.690714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.690950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.690992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.691228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.691267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.691492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.691521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.691762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.691806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.692026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.692053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.692311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.692355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 
00:47:00.251 [2024-07-22 17:00:19.692605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.692648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.692885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.692928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.693160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.693186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.693373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.693415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.693663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.693705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.693914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.693939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.694159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.694189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.694397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.694439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.694690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.694733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.694978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.695018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 
00:47:00.251 [2024-07-22 17:00:19.695264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.695288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.695497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.695540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.695758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.695802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.696069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.696095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.696339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.696363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.696590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.696634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.696834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.696876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.697092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.697133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.697350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.697392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.697607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.697650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 
00:47:00.251 [2024-07-22 17:00:19.697870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.697894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.698166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.698192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.698406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.698449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.698634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.698677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.698833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.698872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.699020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.699050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.699275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.699318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.699533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.699562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.699759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.699801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 00:47:00.251 [2024-07-22 17:00:19.700022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.251 [2024-07-22 17:00:19.700048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.251 qpair failed and we were unable to recover it. 
00:47:00.251 [2024-07-22 17:00:19.700305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.700346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.700584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.700627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.700831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.700873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.701099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.701124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.701359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.701403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.701651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.701694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.701932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.701956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.702163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.702189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.702400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.702442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.702661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.702704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 
00:47:00.252 [2024-07-22 17:00:19.702915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.702940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.703199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.703224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.703449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.703492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.703731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.703773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.703999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.704025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.704260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.704284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.704445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.704491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.704745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.704788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.704985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.705011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.705160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.705185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 
00:47:00.252 [2024-07-22 17:00:19.705404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.705448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.705667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.705710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.705923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.705962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.706157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.706196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.706430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.706473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.706685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.706728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.706940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.706992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.707246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.707285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.707520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.707563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 00:47:00.252 [2024-07-22 17:00:19.707843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.252 [2024-07-22 17:00:19.707886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.252 qpair failed and we were unable to recover it. 
00:47:00.252 [2024-07-22 17:00:19.708104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:00.252 [2024-07-22 17:00:19.708130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:00.252 qpair failed and we were unable to recover it.
[... the same three-line failure repeats back-to-back for every reconnect attempt between 17:00:19.708104 and 17:00:19.764096 (log timestamps 00:47:00.252-00:47:00.255): connect() fails with errno = 111 each time, always against tqpair=0x7f8780000b90, addr=10.0.0.2, port=4420, and the qpair is never recovered; duplicate records elided ...]
00:47:00.255 [2024-07-22 17:00:19.764096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:00.255 [2024-07-22 17:00:19.764122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:00.255 qpair failed and we were unable to recover it.
00:47:00.255 [2024-07-22 17:00:19.764355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.764398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.255 qpair failed and we were unable to recover it. 00:47:00.255 [2024-07-22 17:00:19.764664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.764705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.255 qpair failed and we were unable to recover it. 00:47:00.255 [2024-07-22 17:00:19.764981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.765006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.255 qpair failed and we were unable to recover it. 00:47:00.255 [2024-07-22 17:00:19.765268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.765296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.255 qpair failed and we were unable to recover it. 00:47:00.255 [2024-07-22 17:00:19.765546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.765589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.255 qpair failed and we were unable to recover it. 00:47:00.255 [2024-07-22 17:00:19.765874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.765917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.255 qpair failed and we were unable to recover it. 00:47:00.255 [2024-07-22 17:00:19.766140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.766164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.255 qpair failed and we were unable to recover it. 00:47:00.255 [2024-07-22 17:00:19.766379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.766421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.255 qpair failed and we were unable to recover it. 00:47:00.255 [2024-07-22 17:00:19.766613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.766656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.255 qpair failed and we were unable to recover it. 00:47:00.255 [2024-07-22 17:00:19.766902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.766927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.255 qpair failed and we were unable to recover it. 
00:47:00.255 [2024-07-22 17:00:19.767226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.767252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.255 qpair failed and we were unable to recover it. 00:47:00.255 [2024-07-22 17:00:19.767532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.255 [2024-07-22 17:00:19.767575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.767802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.767845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.768086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.768111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.768328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.768371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.768562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.768604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.768826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.768868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.769079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.769104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.769336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.769378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.769605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.769647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 
00:47:00.256 [2024-07-22 17:00:19.769908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.769933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.770150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.770176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.770372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.770414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.770666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.770708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.770927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.770982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.771222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.771261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.771479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.771522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.771768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.771810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.772037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.772062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.772331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.772374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 
00:47:00.256 [2024-07-22 17:00:19.772613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.772655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.772860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.772884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.773104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.773130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.773403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.773446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.773685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.773729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.773981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.774007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.774239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.774281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.774525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.774568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.774782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.774825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.775083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.775109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 
00:47:00.256 [2024-07-22 17:00:19.775322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.775366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.775604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.775646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.775860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.775884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.776147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.776175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.776402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.776446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.776661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.776704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.776989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.777025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.777240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.777283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.777518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.777561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.777810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.777853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 
00:47:00.256 [2024-07-22 17:00:19.778080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.778105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.778351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.778394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.778657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.778700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.778910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.778942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.779210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.779237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.779449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.779493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.779716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.779759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.780030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.780056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.780291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.780335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.780548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.780590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 
00:47:00.256 [2024-07-22 17:00:19.780869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.780918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.781152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.781178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.781453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.781495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.781704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.781745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.782003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.782028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.782306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.782330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.782537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.782580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.782746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.782788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.783044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.256 [2024-07-22 17:00:19.783069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.256 qpair failed and we were unable to recover it. 00:47:00.256 [2024-07-22 17:00:19.783340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.783383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 
00:47:00.257 [2024-07-22 17:00:19.783657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.783704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.783921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.783943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.784182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.784207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.784438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.784482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.784699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.784740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.784979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.785005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.785184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.785209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.785442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.785485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.785740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.785783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.786007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.786032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 
00:47:00.257 [2024-07-22 17:00:19.786289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.786314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.786542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.786582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.786870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.786913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.787188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.787216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.787497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.787541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.787807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.787850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.788036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.788060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.788302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.788346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.788597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.788640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.788884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.788934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 
00:47:00.257 [2024-07-22 17:00:19.789214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.789240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.789514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.789560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.789765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.789806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.790020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.790045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.790217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.790256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.790466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.790509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.790731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.790775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.790994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.791029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.791337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.791375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.791620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.791663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 
00:47:00.257 [2024-07-22 17:00:19.791863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.791887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.792173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.792199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.792424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.792466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.792728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.792771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.792988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.793013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.793261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.793286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.793531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.793573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.793817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.793860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.794079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.794106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.794341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.794384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 
00:47:00.257 [2024-07-22 17:00:19.794655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.794700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.794939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.794983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.795187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.795210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.795489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.795532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.795753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.795793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.796043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.796068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.796351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.796375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.796649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.796692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.796897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.796921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.797177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.797202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 
00:47:00.257 [2024-07-22 17:00:19.797461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.797503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.797744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.797788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.798058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.798084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.798360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.798388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.798678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.798721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.798900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.798923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.799218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.799244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.799478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.799522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.799784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.799827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.800058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.800084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 
00:47:00.257 [2024-07-22 17:00:19.800295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.800339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.800546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.800588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.800835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.800887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.801165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.801191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.801477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.257 [2024-07-22 17:00:19.801520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.257 qpair failed and we were unable to recover it. 00:47:00.257 [2024-07-22 17:00:19.801800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.258 [2024-07-22 17:00:19.801843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.258 qpair failed and we were unable to recover it. 00:47:00.258 [2024-07-22 17:00:19.802134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.258 [2024-07-22 17:00:19.802160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.258 qpair failed and we were unable to recover it. 00:47:00.258 [2024-07-22 17:00:19.802410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.258 [2024-07-22 17:00:19.802453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.258 qpair failed and we were unable to recover it. 00:47:00.258 [2024-07-22 17:00:19.802685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.258 [2024-07-22 17:00:19.802728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.258 qpair failed and we were unable to recover it. 00:47:00.258 [2024-07-22 17:00:19.802991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.258 [2024-07-22 17:00:19.803016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.258 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats 200 more times between 2024-07-22 17:00:19.803244 and 17:00:19.859958; duplicate entries elided ...]
00:47:00.261 [2024-07-22 17:00:19.860258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.860283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.860529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.860572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.860855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.860897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.861133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.861160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.861437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.861467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.861699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.861740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.861960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.861991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.862238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.862277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.862515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.862558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.862833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.862877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 
00:47:00.261 [2024-07-22 17:00:19.863116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.863143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.863398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.863441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.863673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.863717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.863946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.863990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.864253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.864291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.864534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.864577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.864847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.864890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.865129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.865155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.865443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.865486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.865748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.865791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 
00:47:00.261 [2024-07-22 17:00:19.866018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.866042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.866286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.866310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.866589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.866632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.866915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.866971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.867255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.867279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.867547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.867590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.867862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.867905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.868175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.868200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.868478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.868521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.868722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.868763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 
00:47:00.261 [2024-07-22 17:00:19.868990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.869015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.869275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.869300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.869536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.869579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.869865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.869908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.870161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.870186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.870465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.870508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.870779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.870822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.871157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.871183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.871451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.871494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.871766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.871815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 
00:47:00.261 [2024-07-22 17:00:19.872039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.872065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.872313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.872337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.872611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.872655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.872900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.872945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.873232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.873257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.873534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.873576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.873782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.873825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.874103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.874129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.874414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.874457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.874671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.874714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 
00:47:00.261 [2024-07-22 17:00:19.874978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.875008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.875280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.875304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.875591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.875633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.875899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.875942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.876202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.876228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.876471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.876514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.876771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.261 [2024-07-22 17:00:19.876814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.261 qpair failed and we were unable to recover it. 00:47:00.261 [2024-07-22 17:00:19.877077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.262 [2024-07-22 17:00:19.877103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.262 qpair failed and we were unable to recover it. 00:47:00.262 [2024-07-22 17:00:19.877398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.262 [2024-07-22 17:00:19.877425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.262 qpair failed and we were unable to recover it. 00:47:00.262 [2024-07-22 17:00:19.877711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.262 [2024-07-22 17:00:19.877754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.262 qpair failed and we were unable to recover it. 
00:47:00.262 [2024-07-22 17:00:19.878051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.262 [2024-07-22 17:00:19.878077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.262 qpair failed and we were unable to recover it. 00:47:00.262 [2024-07-22 17:00:19.878342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.262 [2024-07-22 17:00:19.878382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.262 qpair failed and we were unable to recover it. 00:47:00.262 [2024-07-22 17:00:19.878667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.262 [2024-07-22 17:00:19.878713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.262 qpair failed and we were unable to recover it. 00:47:00.262 [2024-07-22 17:00:19.878955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.262 [2024-07-22 17:00:19.879008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.262 qpair failed and we were unable to recover it. 00:47:00.262 [2024-07-22 17:00:19.879248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.262 [2024-07-22 17:00:19.879290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.262 qpair failed and we were unable to recover it. 00:47:00.262 [2024-07-22 17:00:19.879519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.262 [2024-07-22 17:00:19.879549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.262 qpair failed and we were unable to recover it. 00:47:00.262 [2024-07-22 17:00:19.879841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.262 [2024-07-22 17:00:19.879884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.262 qpair failed and we were unable to recover it. 00:47:00.544 [2024-07-22 17:00:19.880190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.544 [2024-07-22 17:00:19.880217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.544 qpair failed and we were unable to recover it. 00:47:00.544 [2024-07-22 17:00:19.880476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.544 [2024-07-22 17:00:19.880520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.544 qpair failed and we were unable to recover it. 00:47:00.544 [2024-07-22 17:00:19.880809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.544 [2024-07-22 17:00:19.880852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.544 qpair failed and we were unable to recover it. 
00:47:00.544 [2024-07-22 17:00:19.881125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.544 [2024-07-22 17:00:19.881152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.544 qpair failed and we were unable to recover it. 00:47:00.544 [2024-07-22 17:00:19.881388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.544 [2024-07-22 17:00:19.881433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.544 qpair failed and we were unable to recover it. 00:47:00.544 [2024-07-22 17:00:19.881699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.544 [2024-07-22 17:00:19.881742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.544 qpair failed and we were unable to recover it. 00:47:00.544 [2024-07-22 17:00:19.881961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.544 [2024-07-22 17:00:19.881996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.544 qpair failed and we were unable to recover it. 00:47:00.544 [2024-07-22 17:00:19.882278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.544 [2024-07-22 17:00:19.882306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.544 qpair failed and we were unable to recover it. 00:47:00.544 [2024-07-22 17:00:19.882583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.544 [2024-07-22 17:00:19.882626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.544 qpair failed and we were unable to recover it. 00:47:00.544 [2024-07-22 17:00:19.882853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.882896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.883172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.883198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.883492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.883535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.883815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.883858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 
00:47:00.545 [2024-07-22 17:00:19.884095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.884122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.884398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.884441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.884689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.884732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.884950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.884997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.885220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.885245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.885525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.885569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.885831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.885874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.886115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.886141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.886344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.886387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.886644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.886687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 
00:47:00.545 [2024-07-22 17:00:19.886957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.887004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.887230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.887268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.887531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.887572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.887777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.887821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.888057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.888083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.888341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.888384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.888644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.888685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.888944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.888989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.889270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.889295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.889586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.889633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 
00:47:00.545 [2024-07-22 17:00:19.889841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.889892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.890144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.890170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.890433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.890479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.890726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.890769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.891061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.891086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.891314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.891339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.891620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.891664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.891928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.891979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.892278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.892302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.892546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.892590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 
00:47:00.545 [2024-07-22 17:00:19.892860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.892903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.893183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.893208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.893471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.893515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.545 [2024-07-22 17:00:19.893776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.545 [2024-07-22 17:00:19.893820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.545 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.894085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.894110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.894403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.894449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.894707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.894751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.895017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.895042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.895268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.895293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.895534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.895577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 
00:47:00.546 [2024-07-22 17:00:19.895849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.895893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.896116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.896142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.896376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.896419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.896680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.896724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.896957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.896990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.897289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.897314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.897597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.897643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.897899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.897952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.898208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.898233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.898514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.898557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 
00:47:00.546 [2024-07-22 17:00:19.898782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.898828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.899112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.899138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.899331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.899375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.899661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.899704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.899970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.900009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.900248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.900285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.900543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.900573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.900847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.900891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.901154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.901179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.901423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.901466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 
00:47:00.546 [2024-07-22 17:00:19.901734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.901777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.902040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.902065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.902348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.546 [2024-07-22 17:00:19.902373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.546 qpair failed and we were unable to recover it. 00:47:00.546 [2024-07-22 17:00:19.902618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.902661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.902935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.902991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.903236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.903276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.903541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.903584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.903859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.903901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.904175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.904201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.904455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.904499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 
00:47:00.547 [2024-07-22 17:00:19.904784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.904826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.905058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.905083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.905369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.905416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.905642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.905685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.905959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.906005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.906239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.906279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.906510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.906553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.906843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.906886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.907150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.907177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.907389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.907432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 
00:47:00.547 [2024-07-22 17:00:19.907668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.907710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.908002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.908027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.908257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.908282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.908537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.908579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.908838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.908881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.909107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.909132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.909362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.909405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.909664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.909707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.909975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.910000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.910250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.910275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 
00:47:00.547 [2024-07-22 17:00:19.910521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.910568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.910832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.910874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.911107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.911133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.911374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.911418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.911645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.911688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.911884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.911907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.912186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.912211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.912487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.912531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.912796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.912826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 00:47:00.547 [2024-07-22 17:00:19.913064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.913092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.547 qpair failed and we were unable to recover it. 
00:47:00.547 [2024-07-22 17:00:19.913307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.547 [2024-07-22 17:00:19.913337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.913593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.913637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.913916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.913940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.914181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.914207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.914496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.914542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.914825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.914855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.915139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.915163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.915439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.915482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.915740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.915782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.916006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.916031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 
00:47:00.548 [2024-07-22 17:00:19.916300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.916326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.916554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.916596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.916872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.916915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.917122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.917149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.917393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.917436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.917697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.917741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.917986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.918014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.918273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.918298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.918580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.918626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.918896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.918941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 
00:47:00.548 [2024-07-22 17:00:19.919202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.919228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.919470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.919513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.919780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.919810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.920067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.920092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.920317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.920360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.920585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.920629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.920864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.920908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.921182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.921208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.921436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.921479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.921742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.921785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 
00:47:00.548 [2024-07-22 17:00:19.922054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.922082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.922322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.922363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.548 [2024-07-22 17:00:19.922626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.548 [2024-07-22 17:00:19.922670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.548 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.922901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.922945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.923163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.923204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.923449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.923492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.923764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.923807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.924097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.924123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.924403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.924450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.924662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.924705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 
00:47:00.549 [2024-07-22 17:00:19.924925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.924951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.925218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.925244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.925527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.925570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.925823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.925865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.926161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.926187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.926465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.926508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.926745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.926787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.927065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.927106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.927372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.927415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.927651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.927694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 
00:47:00.549 [2024-07-22 17:00:19.927980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.928007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.928223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.928249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.928441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.928483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.928749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.928793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.929024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.929049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.929282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.929306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.929519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.929563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.929847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.929891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.930146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.549 [2024-07-22 17:00:19.930171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.549 qpair failed and we were unable to recover it. 00:47:00.549 [2024-07-22 17:00:19.930382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.930424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 
00:47:00.550 [2024-07-22 17:00:19.930669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.930712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.930987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.931013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.931251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.931276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.931552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.931593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.931879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.931922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.932191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.932216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.932420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.932461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.932673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.932717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.932962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.932997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.933271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.933297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 
00:47:00.550 [2024-07-22 17:00:19.933543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.933592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.933892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.933943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.934204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.934229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.934519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.934562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.934825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.934869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.935114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.935140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.935419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.935463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.935723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.935767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.936004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.936029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.936313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.936339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 
00:47:00.550 [2024-07-22 17:00:19.936555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.936599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.936788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.936830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.937060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.937086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.937369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.937412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.937640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.937683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.937915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.937941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.938191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.938233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.938532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.938574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.938828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.938870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.939142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.939183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 
00:47:00.550 [2024-07-22 17:00:19.939447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.939491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.939756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.939800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.940006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.940033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.940262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.940288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.940521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.940564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.940802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.550 [2024-07-22 17:00:19.940844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.550 qpair failed and we were unable to recover it. 00:47:00.550 [2024-07-22 17:00:19.941106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.941132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.941369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.941412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.941623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.941647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.941913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.941938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 
00:47:00.551 [2024-07-22 17:00:19.942166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.942192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.942435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.942479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.942703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.942745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.943024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.943051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.943284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.943310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.943561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.943603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.943864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.943907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.944118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.944145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.944340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.944383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.944658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.944703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 
00:47:00.551 [2024-07-22 17:00:19.944930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.944983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.945214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.945239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.945485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.945529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.945815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.945858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.946086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.946111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.946386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.946429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.946626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.946669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.946916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.946941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.947240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.947266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 00:47:00.551 [2024-07-22 17:00:19.947544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.551 [2024-07-22 17:00:19.947590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.551 qpair failed and we were unable to recover it. 
00:47:00.551 [2024-07-22 17:00:19.947826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:00.551 [2024-07-22 17:00:19.947869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:00.551 qpair failed and we were unable to recover it.
00:47:00.551 [... the same three-line connect failure repeats for roughly 200 further reconnect attempts between 2024-07-22 17:00:19.948 and 17:00:20.007, every one for tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 ...]
00:47:00.557 [2024-07-22 17:00:20.007220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:00.557 [2024-07-22 17:00:20.007248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:00.557 qpair failed and we were unable to recover it.
00:47:00.557 [2024-07-22 17:00:20.007477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.007522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.557 qpair failed and we were unable to recover it. 00:47:00.557 [2024-07-22 17:00:20.007801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.007844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.557 qpair failed and we were unable to recover it. 00:47:00.557 [2024-07-22 17:00:20.008099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.008129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.557 qpair failed and we were unable to recover it. 00:47:00.557 [2024-07-22 17:00:20.008379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.008423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.557 qpair failed and we were unable to recover it. 00:47:00.557 [2024-07-22 17:00:20.008668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.008711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.557 qpair failed and we were unable to recover it. 00:47:00.557 [2024-07-22 17:00:20.008975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.009001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.557 qpair failed and we were unable to recover it. 00:47:00.557 [2024-07-22 17:00:20.009235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.009276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.557 qpair failed and we were unable to recover it. 00:47:00.557 [2024-07-22 17:00:20.009496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.009539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.557 qpair failed and we were unable to recover it. 00:47:00.557 [2024-07-22 17:00:20.009801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.009846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.557 qpair failed and we were unable to recover it. 00:47:00.557 [2024-07-22 17:00:20.010114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.010141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.557 qpair failed and we were unable to recover it. 
00:47:00.557 [2024-07-22 17:00:20.010398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.010448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.557 qpair failed and we were unable to recover it. 00:47:00.557 [2024-07-22 17:00:20.010679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.557 [2024-07-22 17:00:20.010722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.010996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.011024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.011224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.011251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.011501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.011531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.011820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.011865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.012093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.012120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.012351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.012394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.012590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.012640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.012873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.012899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 
00:47:00.558 [2024-07-22 17:00:20.013130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.013164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.013453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.013498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.013785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.013834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.014066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.014094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.014336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.014379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.014643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.014686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.014977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.015005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.015217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.015259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.015486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.015537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.015826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.015870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 
00:47:00.558 [2024-07-22 17:00:20.016075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.016103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.016362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.016405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.016648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.016690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.016922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.016948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.017213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.017242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.017461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.017505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.017748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.017791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.017993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.018019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.018238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.018284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.018535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.018578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 
00:47:00.558 [2024-07-22 17:00:20.018829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.018873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.019074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.019102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.019371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.019423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.019619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.019661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.019903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.019927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.020217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.020260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.020505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.020570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.020827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.020874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.021153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.558 [2024-07-22 17:00:20.021181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.558 qpair failed and we were unable to recover it. 00:47:00.558 [2024-07-22 17:00:20.021428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.021471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 
00:47:00.559 [2024-07-22 17:00:20.021759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.021818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.022039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.022067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.022304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.022346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.022547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.022591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.022847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.022894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.023135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.023161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.023353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.023396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.023703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.023747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.024027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.024055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.024285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.024328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 
00:47:00.559 [2024-07-22 17:00:20.024574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.024617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.024880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.024924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.025230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.025278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.025530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.025573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.025814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.025857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.026132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.026160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.026455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.026509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.026740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.026783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.027050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.027076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.027315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.027357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 
00:47:00.559 [2024-07-22 17:00:20.027591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.027642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.027922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.027970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.028200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.028225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.028502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.028547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.028835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.028891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.029154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.029196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.029473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.029517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.029790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.029832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.030061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.030087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.030356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.030400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 
00:47:00.559 [2024-07-22 17:00:20.030660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.030707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.559 qpair failed and we were unable to recover it. 00:47:00.559 [2024-07-22 17:00:20.030946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.559 [2024-07-22 17:00:20.030991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.031216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.031242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.031477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.031507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.031800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.031843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.032080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.032105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.032386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.032436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.032674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.032722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.032979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.033004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.033241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.033271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 
00:47:00.560 [2024-07-22 17:00:20.033555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.033601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.033861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.033906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.034151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.034177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.034432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.034479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.034711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.034786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.035005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.035033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.035304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.035350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.035595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.035643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.035882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.035908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.036145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.036173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 
00:47:00.560 [2024-07-22 17:00:20.036468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.036514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.036798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.036844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.037112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.037143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.037364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.037409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.037615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.037644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.037886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.037911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.038197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.038224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.038481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.038524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.038771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.038815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.039044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.039070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 
00:47:00.560 [2024-07-22 17:00:20.039302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.039346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.039539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.039582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.039801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.039843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.040067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.040114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.040382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.040425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.560 qpair failed and we were unable to recover it. 00:47:00.560 [2024-07-22 17:00:20.040674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.560 [2024-07-22 17:00:20.040705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.040978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.041005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.041265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.041293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.041579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.041621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.041872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.041916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 
00:47:00.561 [2024-07-22 17:00:20.042180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.042207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.042489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.042533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.042815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.042859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.043108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.043133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.043341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.043382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.043629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.043674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.043943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.043992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.044229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.044258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.044516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.044560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 00:47:00.561 [2024-07-22 17:00:20.044805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.561 [2024-07-22 17:00:20.044848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.561 qpair failed and we were unable to recover it. 
00:47:00.561 [2024-07-22 17:00:20.045078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:00.561 [2024-07-22 17:00:20.045104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:00.561 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 / ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 17:00:20.045 through 17:00:20.099 ...]
00:47:00.567 [2024-07-22 17:00:20.099591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:00.567 [2024-07-22 17:00:20.099635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:00.567 qpair failed and we were unable to recover it.
00:47:00.567 [2024-07-22 17:00:20.099818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.099843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.100068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.100113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.100360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.100402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.100598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.100640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.100830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.100871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.101033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.101063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.101250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.101293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.101493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.101525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.101761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.101813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.102033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.102077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 
00:47:00.567 [2024-07-22 17:00:20.102209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.102253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.102475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.102521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.102745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.102791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.103047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.103074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.103259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.103308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.103528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.103572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.103765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.103810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.104007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.567 [2024-07-22 17:00:20.104035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.567 qpair failed and we were unable to recover it. 00:47:00.567 [2024-07-22 17:00:20.104200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.104258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.104496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.104548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 
00:47:00.568 [2024-07-22 17:00:20.104750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.104799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.104988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.105015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.105203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.105247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.105480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.105532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.105728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.105775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.105954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.105989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.106196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.106223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.106432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.106482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.106705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.106749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.106946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.106979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 
00:47:00.568 [2024-07-22 17:00:20.107180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.107237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.107464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.107508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.107777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.107821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.108029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.108057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.108305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.108349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.108537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.108584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.108795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.108841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.109012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.109050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.109259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.109301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.109507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.109539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 
00:47:00.568 [2024-07-22 17:00:20.109723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.109750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.109926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.109953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.110134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.110179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.110417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.110460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.110686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.110737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.110926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.110954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.111133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.111177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.111352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.111404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.111578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.111621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.111786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.111812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 
00:47:00.568 [2024-07-22 17:00:20.111992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.112036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.112204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.112250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.112433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.112475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.112645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.112689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.112850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.112876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.113042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.568 [2024-07-22 17:00:20.113087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.568 qpair failed and we were unable to recover it. 00:47:00.568 [2024-07-22 17:00:20.113245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.113289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.113445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.113488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.113687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.113712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.113914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.113939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 
00:47:00.569 [2024-07-22 17:00:20.114127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.114172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.114333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.114378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.114535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.114578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.114768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.114794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.114933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.114958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.115102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.115146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.115308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.115350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.115515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.115549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.115732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.115761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.115896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.115937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 
00:47:00.569 [2024-07-22 17:00:20.116120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.116166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.116316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.116358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.116511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.116555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.116718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.116744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.116921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.116962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.117136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.117180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.117370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.117415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.117545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.117588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.117746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.117773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.117911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.117936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 
00:47:00.569 [2024-07-22 17:00:20.118119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.118164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.118320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.118365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.118555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.118599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.118796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.118823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.118984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.119028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.119197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.119240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.119403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.119449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.569 [2024-07-22 17:00:20.119613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.569 [2024-07-22 17:00:20.119657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.569 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.119810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.119836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.119961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.119993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 
00:47:00.570 [2024-07-22 17:00:20.120117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.120161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.120317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.120360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.120537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.120580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.120729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.120755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.120940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.121012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.121225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.121262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.121511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.121541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.121742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.121789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.121997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.122024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.122155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.122182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 
00:47:00.570 [2024-07-22 17:00:20.122366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.122395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.122527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.122555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.122725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.122773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.122961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.122996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.123146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.123175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.123376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.123405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.123581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.123614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.123843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.123890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.124071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.124098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.124263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.124292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 
00:47:00.570 [2024-07-22 17:00:20.124496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.124522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.124678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.124707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.124901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.124930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.125120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.125146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.125402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.125432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.125647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.125676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.125918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.125948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.126141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.126167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.126488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.126522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.126801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.126848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 
00:47:00.570 [2024-07-22 17:00:20.127079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.127106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.127282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.127316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.127486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.127532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.127697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.127744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.127949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.127984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.128137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.570 [2024-07-22 17:00:20.128164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.570 qpair failed and we were unable to recover it. 00:47:00.570 [2024-07-22 17:00:20.128302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.571 [2024-07-22 17:00:20.128331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.571 qpair failed and we were unable to recover it. 00:47:00.571 [2024-07-22 17:00:20.128487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.571 [2024-07-22 17:00:20.128516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.571 qpair failed and we were unable to recover it. 00:47:00.571 [2024-07-22 17:00:20.128683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.571 [2024-07-22 17:00:20.128712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.571 qpair failed and we were unable to recover it. 00:47:00.571 [2024-07-22 17:00:20.128898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.571 [2024-07-22 17:00:20.128926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.571 qpair failed and we were unable to recover it. 
00:47:00.571 [2024-07-22 17:00:20.129111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.571 [2024-07-22 17:00:20.129138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.571 qpair failed and we were unable to recover it.
[... the same error triple (posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 17:00:20.129 through 17:00:20.169. The failing qpair is 0x140c570 throughout, except for eight attempts between 17:00:20.148018 and 17:00:20.149423 that report tqpair=0x7f8778000b90 before reverting to 0x140c570. ...]
00:47:00.852 [2024-07-22 17:00:20.169798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.852 [2024-07-22 17:00:20.169830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.852 qpair failed and we were unable to recover it.
00:47:00.852 [2024-07-22 17:00:20.170017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.852 [2024-07-22 17:00:20.170043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.852 qpair failed and we were unable to recover it. 00:47:00.852 [2024-07-22 17:00:20.170196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.852 [2024-07-22 17:00:20.170227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.852 qpair failed and we were unable to recover it. 00:47:00.852 [2024-07-22 17:00:20.170408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.852 [2024-07-22 17:00:20.170438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.852 qpair failed and we were unable to recover it. 00:47:00.852 [2024-07-22 17:00:20.170629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.852 [2024-07-22 17:00:20.170658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.852 qpair failed and we were unable to recover it. 00:47:00.852 [2024-07-22 17:00:20.170875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.852 [2024-07-22 17:00:20.170903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.852 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.171062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.171088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.171229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.171254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.171403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.171431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.171605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.171634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.171801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.171836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 
00:47:00.853 [2024-07-22 17:00:20.172045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.172071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.172203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.172248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.172430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.172455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.172616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.172645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.172840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.172878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.173060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.173086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.173198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.173223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.173433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.173462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.173647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.173676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.173820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.173847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 
00:47:00.853 [2024-07-22 17:00:20.174038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.174075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.174218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.174261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.174451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.174479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.174656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.174684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.174871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.174903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.175070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.175100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.175261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.175285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.175485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.175514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.175667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.175694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.175860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.175889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 
00:47:00.853 [2024-07-22 17:00:20.176055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.176082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.176265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.176289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.176468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.176491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.176640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.176667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.176821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.176849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.177029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.177055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.177188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.177212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.177409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.177437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.177602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.177630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.177837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.177867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 
00:47:00.853 [2024-07-22 17:00:20.178042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.178068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.178184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.178209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.179331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.179364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.179545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.179575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.179737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.179761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.180683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.180716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.180937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.180979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.181142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.181171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.181367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.181406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.181581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.181610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 
00:47:00.853 [2024-07-22 17:00:20.181776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.181804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.181971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.182008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.182149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.182180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.182336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.182359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.182534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.182562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.182751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.182780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.853 qpair failed and we were unable to recover it. 00:47:00.853 [2024-07-22 17:00:20.183000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.853 [2024-07-22 17:00:20.183044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.183171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.183201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.183349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.183377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.183558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.183586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 
00:47:00.854 [2024-07-22 17:00:20.183711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.183734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.183937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.183970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.184114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.184138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.184308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.184355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.184578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.184611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.184883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.184911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.185055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.185081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.185243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.185283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.185486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.185510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.185717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.185744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 
00:47:00.854 [2024-07-22 17:00:20.185956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.185991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.186172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.186199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.186363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.186386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.186581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.186614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.186815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.186843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.187041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.187070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.187229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.187267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.187442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.187470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.187634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.187663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.187819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.187852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 
00:47:00.854 [2024-07-22 17:00:20.188032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.188057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.188174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.188215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.188372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.188401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.188551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.188579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.188729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.188753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.188962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.189007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.189172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.189200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.189397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.189429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.189607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.189632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.189863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.189893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 
00:47:00.854 [2024-07-22 17:00:20.190082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.190108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.190328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.190356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.190531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.190563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.190726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.190754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.190893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.190921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.191112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.191138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.191285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.191309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.191563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.191590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.191728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.191757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.191905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.191932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 
00:47:00.854 [2024-07-22 17:00:20.192093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.192121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.192289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.192319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.192504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.192552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.192713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.192742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.192905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.192929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.193110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.193135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.193287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.193319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.193533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.193580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.854 qpair failed and we were unable to recover it. 00:47:00.854 [2024-07-22 17:00:20.193738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.854 [2024-07-22 17:00:20.193760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.193958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.193998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 
00:47:00.855 [2024-07-22 17:00:20.194142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.194169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.194387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.194415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.194569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.194593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.194788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.194815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.194978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.195005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.195134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.195163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.195378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.195401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.195568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.195595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.195781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.195807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.196010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.196039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 
00:47:00.855 [2024-07-22 17:00:20.196180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.196206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.196401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.196433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.196627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.196655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.196820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.196847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.197036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.197061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.197229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.197257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.197454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.197482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.197646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.197674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.197854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.197876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 00:47:00.855 [2024-07-22 17:00:20.198055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.855 [2024-07-22 17:00:20.198083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.855 qpair failed and we were unable to recover it. 
00:47:00.855 [2024-07-22 17:00:20.198250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:00.855 [2024-07-22 17:00:20.198278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:00.855 qpair failed and we were unable to recover it.
00:47:00.859 [the identical connect()/qpair-failure triplet above repeats roughly 200 consecutive times, from 17:00:20.198 through 17:00:20.237; every attempt fails with errno = 111 against addr=10.0.0.2, port=4420, tqpair=0x140c570]
00:47:00.859 [2024-07-22 17:00:20.237523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.237545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.237717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.237740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.237880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.237903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.238087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.238110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.238232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.238268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.238417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.238440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.238559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.238595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.238747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.238769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.238918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.238956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.239123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.239146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 
00:47:00.859 [2024-07-22 17:00:20.239286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.239309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.239444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.239486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.239663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.239690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.239860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.239887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.859 [2024-07-22 17:00:20.240054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.859 [2024-07-22 17:00:20.240084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.859 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.240262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.240290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.240482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.240510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.240670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.240694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.240842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.240863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.241015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.241039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 
00:47:00.860 [2024-07-22 17:00:20.241151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.241174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.241329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.241351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.241533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.241556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.241720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.241743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.241909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.241935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.242128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.242151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.242291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.242314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.242467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.242490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.242636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.242673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.242773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.242795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 
00:47:00.860 [2024-07-22 17:00:20.242927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.242949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.243116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.243139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.243312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.243335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.243495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.243518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.243689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.243711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.243857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.243880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.244055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.244079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.244223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.244246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.244411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.244434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.244615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.244637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 
00:47:00.860 [2024-07-22 17:00:20.244822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.244845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.244990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.245013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.245162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.245185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.245341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.245365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.245534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.245557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.245694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.245716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.245871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.245894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.246070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.246098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.246274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.246301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.246488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.246516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 
00:47:00.860 [2024-07-22 17:00:20.246703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.246731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.246893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.246926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.247103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.247132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.247326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.247354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.247530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.247598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.247768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.247796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.247989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.248017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.248140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.248163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.248304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.248327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.248475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.248498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 
00:47:00.860 [2024-07-22 17:00:20.248673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.248696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.248873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.248896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.249031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.249055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.249198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.249221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.249388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.249411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.249577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.249599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.249705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.860 [2024-07-22 17:00:20.249742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.860 qpair failed and we were unable to recover it. 00:47:00.860 [2024-07-22 17:00:20.249916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.249939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.250104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.250127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.250264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.250287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 
00:47:00.861 [2024-07-22 17:00:20.250443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.250466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.250631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.250653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.250829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.250852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.250958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.250988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.251135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.251158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.251359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.251382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.251571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.251594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.251721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.251744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.251912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.251935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.252111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.252135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 
00:47:00.861 [2024-07-22 17:00:20.252288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.252311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.252473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.252496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.252670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.252693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.252872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.252895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.253038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.253061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.253261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.253284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.253519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.253542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.253773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.253795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.253980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.254004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.254171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.254194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 
00:47:00.861 [2024-07-22 17:00:20.254423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.254472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.254659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.254682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.254923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.254947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.255120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.255143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.255360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.255382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.255505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.255535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.255761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.255785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.256067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.256106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.256263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.256286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.256502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.256525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 
00:47:00.861 [2024-07-22 17:00:20.256724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.256747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.256962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.256992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.257146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.257169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.257332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.257355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.257634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.257656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.257938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.257961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.258131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.258154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.258326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.258348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.258512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.258534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.258762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.258785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 
00:47:00.861 [2024-07-22 17:00:20.259007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.259030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.259234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.259256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.259466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.259489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.259663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.259685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.259869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.259892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.260047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.260071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.260201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.260239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.260472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.260494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.260704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.861 [2024-07-22 17:00:20.260727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.861 qpair failed and we were unable to recover it. 00:47:00.861 [2024-07-22 17:00:20.260956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.862 [2024-07-22 17:00:20.261014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.862 qpair failed and we were unable to recover it. 
00:47:00.862 [2024-07-22 17:00:20.261137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.862 [2024-07-22 17:00:20.261161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.862 qpair failed and we were unable to recover it. 00:47:00.862 [2024-07-22 17:00:20.261361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.862 [2024-07-22 17:00:20.261383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.862 qpair failed and we were unable to recover it. 00:47:00.862 [2024-07-22 17:00:20.261535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.862 [2024-07-22 17:00:20.261567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.862 qpair failed and we were unable to recover it. 00:47:00.862 [2024-07-22 17:00:20.261801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.862 [2024-07-22 17:00:20.261824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.862 qpair failed and we were unable to recover it. 00:47:00.862 [2024-07-22 17:00:20.262047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.862 [2024-07-22 17:00:20.262070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.862 qpair failed and we were unable to recover it. 00:47:00.862 [2024-07-22 17:00:20.262226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.862 [2024-07-22 17:00:20.262249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.862 qpair failed and we were unable to recover it. 00:47:00.862 [2024-07-22 17:00:20.262405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.862 [2024-07-22 17:00:20.262427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.862 qpair failed and we were unable to recover it. 00:47:00.862 [2024-07-22 17:00:20.262551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.862 [2024-07-22 17:00:20.262574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.862 qpair failed and we were unable to recover it. 00:47:00.862 [2024-07-22 17:00:20.262780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.862 [2024-07-22 17:00:20.262803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.862 qpair failed and we were unable to recover it. 00:47:00.862 [2024-07-22 17:00:20.262985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.862 [2024-07-22 17:00:20.263009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.862 qpair failed and we were unable to recover it. 
00:47:00.862 [2024-07-22 17:00:20.263120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:00.862 [2024-07-22 17:00:20.263143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:00.862 qpair failed and we were unable to recover it.
00:47:00.862 [... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x140c570 through 2024-07-22 17:00:20.302652 ...]
00:47:00.865 [2024-07-22 17:00:20.302894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:00.865 [2024-07-22 17:00:20.302933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:00.865 qpair failed and we were unable to recover it.
00:47:00.866 [... the same triplet then repeats for tqpair=0x7f8788000b90 through 2024-07-22 17:00:20.309465 ...]
00:47:00.866 [2024-07-22 17:00:20.309656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.309679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.309882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.309905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.310084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.310109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.310336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.310360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.310574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.310597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.310810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.310833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.311006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.311031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.311153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.311177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.311389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.311412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.311573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.311596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 
00:47:00.866 [2024-07-22 17:00:20.311781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.311805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.312001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.312025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.312180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.312204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.312374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.312397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.312570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.312593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.312749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.312773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.312914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.312952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.313114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.313138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.313322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.313346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.313540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.313563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 
00:47:00.866 [2024-07-22 17:00:20.313752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.313775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.313997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.314021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.314190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.314217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.314407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.314430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.314629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.314652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.314873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.314896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.315150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.866 [2024-07-22 17:00:20.315174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.866 qpair failed and we were unable to recover it. 00:47:00.866 [2024-07-22 17:00:20.315342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.315365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.315539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.315562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.315775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.315798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 
00:47:00.867 [2024-07-22 17:00:20.315999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.316023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.316216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.316259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.316497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.316521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.316798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.316821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.317102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.317126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.317384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.317407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.317612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.317636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.317832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.317856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.318091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.318115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.318331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.318354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 
00:47:00.867 [2024-07-22 17:00:20.318526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.318549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.318792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.318840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.318988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.319032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.319254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.319277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.319502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.319525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.319797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.319820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.320040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.320065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.320241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.320278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.320426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.320450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.320667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.320694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 
00:47:00.867 [2024-07-22 17:00:20.320888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.320911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.321068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.321093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.321273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.321297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.321476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.321499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.321692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.321715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.321973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.322012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.322269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.322293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.322451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.322474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.322627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.322651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.322836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.322869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 
00:47:00.867 [2024-07-22 17:00:20.323040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.323066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.323221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.323259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.323449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.323471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.323697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.323720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.323947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.323992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.324194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.324218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.324479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.324512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.324736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.324760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.324924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.324947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.325152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.325175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 
00:47:00.867 [2024-07-22 17:00:20.325456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.325479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.325707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.325730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.325891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.325914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.326067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.326100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.326340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.326362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.326613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.326637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.326882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.326905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.327151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.327176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.867 qpair failed and we were unable to recover it. 00:47:00.867 [2024-07-22 17:00:20.327448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.867 [2024-07-22 17:00:20.327471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.327750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.327778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 
00:47:00.868 [2024-07-22 17:00:20.328047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.328072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.328236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.328259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.328498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.328521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.328731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.328754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.329020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.329044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.329282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.329305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.329550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.329573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.329877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.329900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.330117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.330142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.330398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.330430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 
00:47:00.868 [2024-07-22 17:00:20.330709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.330732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.330990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.331014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.331185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.331208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.331476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.331499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.331758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.331780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.332043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.332067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.332332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.332355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.332589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.332612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.332817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.332840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.333123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.333147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 
00:47:00.868 [2024-07-22 17:00:20.333412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.333435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.333624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.333653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.333896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.333925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.334231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.334256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.334519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.334542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.334767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.334791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.334947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.334989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.335199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.335223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.335427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.335450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.335728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.335751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 
00:47:00.868 [2024-07-22 17:00:20.335988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.336027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.336260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.336284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.336517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.336566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.336856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.336906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.337193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.337217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.337498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.337521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.337768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.337792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.337960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.338012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.338194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.338221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.338493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.338517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 
00:47:00.868 [2024-07-22 17:00:20.338758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.338781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.339055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.339079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.339353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.339376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.339640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.339663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.339949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.339978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.340205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.340229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.340463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.340486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.340684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.340712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.341007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.341037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 00:47:00.868 [2024-07-22 17:00:20.341313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.341340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it. 
00:47:00.868 [2024-07-22 17:00:20.341513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.868 [2024-07-22 17:00:20.341536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.868 qpair failed and we were unable to recover it.
[duplicates elided: the same three-message sequence (connect() failed, errno = 111, i.e. ECONNREFUSED; sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for roughly 140 attempts in total between 17:00:20.341735 and 17:00:20.380360]
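For context on the errno value that dominates this run: on Linux, errno = 111 is ECONNREFUSED, meaning the target at 10.0.0.2:4420 was reachable but nothing was accepting connections on that port. A minimal sketch of the same failure mode (plain POSIX sockets, not SPDK code; the address and port are taken from the log):

/* Sketch only: dials the address/port from the log and prints the errno
 * that posix_sock_create would report when no listener is present. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host up but no listener on the port, errno is 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a host with no listener on port 4420, this prints "connect() failed, errno = 111 (Connection refused)", matching the posix_sock_create lines above.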
00:47:00.871 [2024-07-22 17:00:20.380587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.871 [2024-07-22 17:00:20.380638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:00.871 qpair failed and we were unable to recover it. [one further identical attempt at 17:00:20.380909 elided]
00:47:00.871 [2024-07-22 17:00:20.381009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141a0f0 (9): Bad file descriptor
00:47:00.871 [2024-07-22 17:00:20.381266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.871 [2024-07-22 17:00:20.381303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.871 qpair failed and we were unable to recover it. [six further identical attempts against tqpair=0x140c570 between 17:00:20.381588 and 17:00:20.382680 elided]
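The one distinct event in this stretch is the flush failure with code (9). On Linux, errno = 9 is EBADF: by the time nvme_tcp_qpair_process_completions tried to flush tqpair=0x141a0f0, its socket descriptor had already been torn down. A minimal sketch of that failure mode (plain POSIX, not SPDK code):

/* Sketch only: I/O on a descriptor that has already been closed fails
 * with errno = 9 (EBADF), the code reported by the flush error above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fd = dup(1);   /* any valid descriptor */
    close(fd);         /* tear it down, as a failed qpair's socket would be */

    char byte = 0;
    if (write(fd, &byte, 1) < 0) {
        /* Prints errno = 9: the descriptor is no longer valid */
        printf("flush failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}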
00:47:00.872 [2024-07-22 17:00:20.382960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.872 [2024-07-22 17:00:20.382990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.872 qpair failed and we were unable to recover it.
[duplicates elided: the same three-message sequence against tqpair=0x140c570 repeats for roughly 58 further attempts between 17:00:20.383235 and 17:00:20.398744]
00:47:00.873 [2024-07-22 17:00:20.398960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.398997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it.
00:47:00.873 [2024-07-22 17:00:20.399300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.399328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.399609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.399637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.399931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.399953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.400174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.400198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.400484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.400511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.400777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.400800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.400994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.401017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.401293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.401316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.401578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.401601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.401877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.401900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 
00:47:00.873 [2024-07-22 17:00:20.402144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.402168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.402400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.402423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.402668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.402691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.402908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.402931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.403144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.403173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.403461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.403483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.403780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.403808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.404097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.404121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.404375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.404398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.404586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.404619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 
00:47:00.873 [2024-07-22 17:00:20.404896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.404919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.405167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.405191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.405386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.405409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.405657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.405681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.405860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.405888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.406086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.406115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.406319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.406347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.406551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.406574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.406837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.406860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.407127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.407151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 
00:47:00.873 [2024-07-22 17:00:20.407431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.407455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.407679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.407705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.407936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.407958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.408251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.408275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.408482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.873 [2024-07-22 17:00:20.408505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.873 qpair failed and we were unable to recover it. 00:47:00.873 [2024-07-22 17:00:20.408768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.408790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.408991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.409016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.409194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.409222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.409476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.409499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.409794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.409842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 
00:47:00.874 [2024-07-22 17:00:20.410133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.410158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.410371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.410394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.410629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.410652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.410940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.410974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.411232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.411255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.411515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.411567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.411845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.411868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.412148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.412172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.412363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.412386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.412599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.412621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 
00:47:00.874 [2024-07-22 17:00:20.412791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.412814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.412988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.413011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.413288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.413326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.413507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.413530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.413801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.413823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.414090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.414114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.414352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.414375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.414636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.414659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.414925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.414953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.415268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.415291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 
00:47:00.874 [2024-07-22 17:00:20.415538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.415561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.415837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.415860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.416115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.416140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.416322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.416357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.416549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.416573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.416879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.416903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.417153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.417177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.417341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.417364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.417641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.417664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.417906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.417929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 
00:47:00.874 [2024-07-22 17:00:20.418194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.418220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.418477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.418501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.418760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.418786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.418941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.418971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.419204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.419226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.419483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.419507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.419733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.419757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.420024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.420049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.420288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.420311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.420569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.420592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 
00:47:00.874 [2024-07-22 17:00:20.420869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.420892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.421144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.421169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.421399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.421423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.421673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.421695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.421910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.421933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.422086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.422110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.874 [2024-07-22 17:00:20.422336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.874 [2024-07-22 17:00:20.422365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.874 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.422639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.422690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.422990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.423015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.423225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.423249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 
00:47:00.875 [2024-07-22 17:00:20.423507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.423529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.423821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.423844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.424066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.424092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.424284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.424308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.424579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.424602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.424845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.424868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.425077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.425101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.425322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.425345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.425572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.425596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.425829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.425855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 
00:47:00.875 [2024-07-22 17:00:20.426116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.426141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.426365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.426388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.426597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.426622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.426832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.426871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.427108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.427132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.427393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.427416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.427692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.427720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.427957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.428008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.428192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.428216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.428467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.428490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 
00:47:00.875 [2024-07-22 17:00:20.428729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.428754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.429019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.429044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.429315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.429338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.429602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.429626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.429863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.429888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.430075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.430109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.430344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.430372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.430615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.430666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.430932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.430961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.431255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.431278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 
00:47:00.875 [2024-07-22 17:00:20.431535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.431559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.431741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.431764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.432019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.432043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.432274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.432316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.432552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.432577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.432831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.432855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.433131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.433167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.433409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.433438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.433721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.433745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 00:47:00.875 [2024-07-22 17:00:20.434021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:00.875 [2024-07-22 17:00:20.434045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:00.875 qpair failed and we were unable to recover it. 
00:47:00.875 [2024-07-22 17:00:20.434307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:00.875 [2024-07-22 17:00:20.434332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:00.875 qpair failed and we were unable to recover it.
00:47:01.157 [2024-07-22 17:00:20.491443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.157 [2024-07-22 17:00:20.491469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.157 qpair failed and we were unable to recover it.
00:47:01.157 [2024-07-22 17:00:20.491756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.491786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.492034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.492059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.492232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.492257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.492501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.492532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.492813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.492838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.493057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.493083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.493253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.493280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.493524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.493548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.493808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.493833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.494075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.494100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 
00:47:01.157 [2024-07-22 17:00:20.494250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.494288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.494469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.494494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.494647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.494671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.494921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.494944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.495174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.495202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.495433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.495457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.495654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.495678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.495922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.495951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.496126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.496163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.496436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.496461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 
00:47:01.157 [2024-07-22 17:00:20.496749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.496789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.497056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.497082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.497263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.497301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.497511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.497536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.497763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.497788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.497992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.498033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.498164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.498191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.498341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.498366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.498544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.498568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.498820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.498847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 
00:47:01.157 [2024-07-22 17:00:20.499092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.499121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.157 qpair failed and we were unable to recover it. 00:47:01.157 [2024-07-22 17:00:20.499299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.157 [2024-07-22 17:00:20.499323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.499490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.499512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.499791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.499814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.500000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.500031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.500210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.500238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.500468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.500493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.500746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.500770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.501029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.501055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.501245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.501270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 
00:47:01.158 [2024-07-22 17:00:20.501395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.501421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.501691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.501717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.501980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.502019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.502160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.502186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.502368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.502394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.502647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.502672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.502872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.502899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.503072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.503098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.503326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.503351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.503560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.503585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 
00:47:01.158 [2024-07-22 17:00:20.503836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.503862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.504075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.504101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.504247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.504273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.504466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.504492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.504688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.504713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.504933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.504958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.505144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.505170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.505464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.505504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.505729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.505757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.505983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.506021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 
00:47:01.158 [2024-07-22 17:00:20.506171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.506196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.506415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.506447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.506666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.506694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.506836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.506860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.507070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.507098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.507296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.507338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.507564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.507613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.507809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.507833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.507978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.508020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.508169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.508194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 
00:47:01.158 [2024-07-22 17:00:20.508316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.508344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.508523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.508561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.508726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.508750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.508902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.508926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.509122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.509149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.509323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.509352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.509598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.509649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.509840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.509865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.510004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.510041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.510192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.510218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 
00:47:01.158 [2024-07-22 17:00:20.510415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.510444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.510657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.510708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.510877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.510901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.511089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.511116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.511234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.511267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.511520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.511575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.511863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.511886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.512067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.512093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.512263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.512306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 00:47:01.158 [2024-07-22 17:00:20.512488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.512512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.158 qpair failed and we were unable to recover it. 
00:47:01.158 [2024-07-22 17:00:20.512686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.158 [2024-07-22 17:00:20.512710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.512873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.512899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.513120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.513146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.513422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.513450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.513695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.513741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.514015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.514041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.514189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.514217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.514350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.514392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.514598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.514623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.514857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.514881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 
00:47:01.159 [2024-07-22 17:00:20.515060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.515086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.515229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.515270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.515424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.515449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.515631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.515655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.515811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.515834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.515989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.516015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.516144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.516170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.516313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.516337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.516489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.516528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.516641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.516679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 
00:47:01.159 [2024-07-22 17:00:20.516820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.516849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.516986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.517016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.517166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.517192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.517355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.517392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.517560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.517584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.517756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.517780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.517977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.518003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.518148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.518174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.518332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.518356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.518509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.518533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 
00:47:01.159 [2024-07-22 17:00:20.518689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.518727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.518863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.518888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.519047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.519076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.519223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.519248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.519378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.519402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.519569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.519592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.519743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.519781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.519957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.519996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.520115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.520140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 00:47:01.159 [2024-07-22 17:00:20.520319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.159 [2024-07-22 17:00:20.520342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.159 qpair failed and we were unable to recover it. 
00:47:01.159 [2024-07-22 17:00:20.520500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.159 [2024-07-22 17:00:20.520523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.159 qpair failed and we were unable to recover it.
00:47:01.163 [... the identical three-line failure sequence repeats roughly 200 more times between 17:00:20.520 and 17:00:20.560: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x140c570, and each qpair fails without recovery ...]
00:47:01.163 [2024-07-22 17:00:20.560481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.560519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.560660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.560698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.560890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.560930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.561095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.561120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.561261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.561299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.561466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.561489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.561647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.561669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.561850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.561876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.562066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.562090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.562274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.562297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 
00:47:01.163 [2024-07-22 17:00:20.562460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.562484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.562648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.562672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.562849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.562872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.563059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.563086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.563263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.563300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.563471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.563498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.563652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.563675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.563852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.563880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.564048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.564073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.564261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.564300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 
00:47:01.163 [2024-07-22 17:00:20.564488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.564511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.564691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.564713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.564876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.564904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.565080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.565104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.565244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.565268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.565432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.565456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.565591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.565630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.565773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.565810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.565992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.566016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 00:47:01.163 [2024-07-22 17:00:20.566180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.163 [2024-07-22 17:00:20.566204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.163 qpair failed and we were unable to recover it. 
00:47:01.164 [2024-07-22 17:00:20.566373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.566397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.566552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.566575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.566735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.566758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.566902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.566941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.567132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.567158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.567277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.567302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.567504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.567527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.567681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.567705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.567837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.567876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.568048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.568073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 
00:47:01.164 [2024-07-22 17:00:20.568265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.568289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.568463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.568486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.568645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.568669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.568830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.568853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.568989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.569014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.569157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.569182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.569331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.569357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.569540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.569579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.569721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.569743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.569935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.569974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 
00:47:01.164 [2024-07-22 17:00:20.570112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.570137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.570279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.570302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.570459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.570482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.570603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.570632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.570824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.570852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.570998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.571038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.571154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.571193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.571363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.571386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.571515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.571541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.571738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.571761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 
00:47:01.164 [2024-07-22 17:00:20.571935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.571980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.572179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.572206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.572363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.572388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.572501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.572539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.572704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.572728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.572885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.572913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.573079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.573103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.573294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.573332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.573492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.573524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.573703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.573732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 
00:47:01.164 [2024-07-22 17:00:20.573912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.573941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.574156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.574184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.574385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.574436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.574590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.574633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.574782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.574811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.574988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.575029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.575185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.575209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.575362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.575399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.575541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.575564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.575757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.575780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 
00:47:01.164 [2024-07-22 17:00:20.575899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.575924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.576115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.576139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.576309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.576333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.164 [2024-07-22 17:00:20.576516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.164 [2024-07-22 17:00:20.576543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.164 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.576690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.576713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.576883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.576912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.577089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.577114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.577265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.577289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.577449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.577473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.577601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.577627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 
00:47:01.165 [2024-07-22 17:00:20.577786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.577824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.578002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.578027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.578183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.578207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.578328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.578368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.578523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.578547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.578739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.578763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.578931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.578959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.579119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.579145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.579298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.579326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.579490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.579519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 
00:47:01.165 [2024-07-22 17:00:20.579694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.579722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.579872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.579901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.580067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.580091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.580221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.580260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.580410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.580449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.580567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.580591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.580745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.580768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.580923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.580947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.581123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.581147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.581299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.581339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 
00:47:01.165 [2024-07-22 17:00:20.581503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.581531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.581702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.581724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.581905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.581933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.582105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.582129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.582274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.582302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.582458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.582480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.582635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.582672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.582789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.582827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.582987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.583012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.583150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.583173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 
00:47:01.165 [2024-07-22 17:00:20.583299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.583336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.583479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.583502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.583697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.583719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.583893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.583915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.584115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.584139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.584265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.584288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.584430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.584466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.584640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.584663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.584829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.584852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 00:47:01.165 [2024-07-22 17:00:20.584989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.165 [2024-07-22 17:00:20.585012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.165 qpair failed and we were unable to recover it. 
00:47:01.165 [2024-07-22 17:00:20.585165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.165 [2024-07-22 17:00:20.585188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.165 qpair failed and we were unable to recover it.
[log condensed: the same three-line failure sequence repeats continuously from 17:00:20.585165 through 17:00:20.626038 (console timestamps 00:47:01.165-00:47:01.169). Every iteration is a connect() to addr=10.0.0.2, port=4420 failing with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error, alternating between tqpair=0x140c570 and tqpair=0x7f8780000b90, and ending with "qpair failed and we were unable to recover it."]
00:47:01.169 [2024-07-22 17:00:20.625998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.169 [2024-07-22 17:00:20.626038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.169 qpair failed and we were unable to recover it.
00:47:01.169 [2024-07-22 17:00:20.626207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.626254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.626453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.626495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.626649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.626671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.626825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.626847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.627040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.627068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.627238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.627280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.627416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.627444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.627625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.627665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.627850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.627873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.628006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.628032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 
00:47:01.169 [2024-07-22 17:00:20.628180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.628223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.628388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.628429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.628592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.628634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.628808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.169 [2024-07-22 17:00:20.628832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.169 qpair failed and we were unable to recover it. 00:47:01.169 [2024-07-22 17:00:20.629045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.629088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.629225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.629268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.629383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.629420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.629586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.629608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.629769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.629793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.629990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.630016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 
00:47:01.170 [2024-07-22 17:00:20.630147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.630171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.630322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.630345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.630532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.630574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.630726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.630767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.630932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.630974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.631175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.631217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.631402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.631443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.631618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.631659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.631834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.631857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.632027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.632056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 
00:47:01.170 [2024-07-22 17:00:20.632268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.632309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.632486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.632528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.632654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.632694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.632871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.632894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.633028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.633053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.633235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.633275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.633454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.633495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.633674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.633716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.633868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.633899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.634070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.634099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 
00:47:01.170 [2024-07-22 17:00:20.634297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.634338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.634446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.634474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.634668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.634710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.634848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.634885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.635053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.635091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.635264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.635305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.635441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.635469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.635609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.635651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.635788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.635811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.635975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.636000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 
00:47:01.170 [2024-07-22 17:00:20.636153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.636177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.636346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.636370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.636531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.636571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.636749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.636772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.636929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.636971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.637133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.637175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.637317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.637345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.637519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.637546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.637732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.637769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.637959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.638013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 
00:47:01.170 [2024-07-22 17:00:20.638158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.638199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.638383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.638424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.638613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.638655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.638803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.638826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.638981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.639007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.639137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.639178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.639351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.639393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.639539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.639580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.639761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.639784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.639972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.639997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 
00:47:01.170 [2024-07-22 17:00:20.640139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.170 [2024-07-22 17:00:20.640180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.170 qpair failed and we were unable to recover it. 00:47:01.170 [2024-07-22 17:00:20.640325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.640365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.640536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.640578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.640722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.640750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.640951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.640978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.641141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.641181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.641334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.641375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.641563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.641604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.641777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.641822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.642006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.642031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 
00:47:01.171 [2024-07-22 17:00:20.642184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.642225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.642376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.642417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.642593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.642635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.642766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.642803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.642961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.642992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.643170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.643211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.643384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.643426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.643603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.643645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.643823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.643845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.644002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.644026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 
00:47:01.171 [2024-07-22 17:00:20.644152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.644193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.644332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.644373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.644519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.644547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.644714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.644736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.644917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.644954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.645155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.645198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.645349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.645392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.645531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.645558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.645758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.645780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.645978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.646003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 
00:47:01.171 [2024-07-22 17:00:20.646183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.646224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.646423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.646464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.646599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.646627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.646809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.646845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.647000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.647024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.647225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.647268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.647409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.647432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.647613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.647654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.647826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.647849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.647998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.648022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 
00:47:01.171 [2024-07-22 17:00:20.648195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.648224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.648361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.648401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.648572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.648613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.648726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.648749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.648902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.648924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.649097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.649138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.649267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.649308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.649479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.649520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.649684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.649710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.649826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.649848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 
00:47:01.171 [2024-07-22 17:00:20.650000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.650024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.650183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.650223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.650398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.650440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.650569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.650593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.650759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.650782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.650980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.651017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.651171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.171 [2024-07-22 17:00:20.651213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.171 qpair failed and we were unable to recover it. 00:47:01.171 [2024-07-22 17:00:20.651387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.651429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 00:47:01.172 [2024-07-22 17:00:20.651587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.651628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 00:47:01.172 [2024-07-22 17:00:20.651777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.651800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 
00:47:01.172 [2024-07-22 17:00:20.651974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.651999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 00:47:01.172 [2024-07-22 17:00:20.652177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.652219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 00:47:01.172 [2024-07-22 17:00:20.652348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.652391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 00:47:01.172 [2024-07-22 17:00:20.652533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.652561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 00:47:01.172 [2024-07-22 17:00:20.652740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.652781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 00:47:01.172 [2024-07-22 17:00:20.652921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.652944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 00:47:01.172 [2024-07-22 17:00:20.653143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.653186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 00:47:01.172 [2024-07-22 17:00:20.653372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.653413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 00:47:01.172 [2024-07-22 17:00:20.653585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.653627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 00:47:01.172 [2024-07-22 17:00:20.653750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.653788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 qpair failed and we were unable to recover it. 
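For context on the failure loop above: errno = 111 is ECONNREFUSED on Linux, meaning nothing is listening on 10.0.0.2 port 4420 at this point, so every TCP connect() issued by the host's posix sock layer is rejected immediately and each qpair gives up. A minimal probe of the same condition, using only bash's /dev/tcp pseudo-device (no SPDK involved; address and port taken from the log):

    # attempt a plain TCP connect to the NVMe/TCP target port; this exercises
    # the same socket()+connect() path that posix_sock_create is reporting on
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "10.0.0.2:4420 is accepting connections"
    else
        echo "connection refused (ECONNREFUSED, errno 111): target is down"
    fi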
00:47:01.172 [2024-07-22 17:00:20.655522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.172 [2024-07-22 17:00:20.655550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2967683 Killed "${NVMF_APP[@]}" "$@" 00:47:01.172 qpair failed and we were unable to recover it.
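The "Killed" notice embedded above is the shell's job-control message: target_disconnect.sh has sent SIGKILL to the running nvmf target process (the "${NVMF_APP[@]}" command) to simulate a target-side disconnect, so the host-side qpair failures in this window are the behavior under test, not a crash. A hedged sketch of that pattern; NVMF_APP and the 0xF0 core mask are visible in this log, but the body below is an illustrative reconstruction, not the actual script:

    "${NVMF_APP[@]}" -m 0xF0 &    # launch the SPDK nvmf target
    nvmfpid=$!
    sleep 2                       # let the host establish its qpairs
    kill -9 "$nvmfpid"            # SIGKILL; bash later reports "Killed ${NVMF_APP[@]}"
    # from here until the target is restarted, every host connect() to
    # 10.0.0.2:4420 fails with errno 111, exactly as logged above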
00:47:01.172 [... the connect()/qpair-failed triple repeats at timestamps 17:00:20.655745 through 17:00:20.657226, interleaved with the following test trace ...]
00:47:01.172 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:47:01.172 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:47:01.172 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:47:01.172 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:47:01.172 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
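[Editor's note] The trace shows test case tc2 calling disconnect_init 10.0.0.2, which restarts the target via nvmfappstart -m 0xF0. In SPDK applications -m takes a hexadecimal core mask; assuming the usual bit-per-core convention, 0xF0 pins the target to cores 4-7. A quick sketch of the decoding:

```c
/* Decode an SPDK-style -m core mask: each set bit selects one CPU core.
 * For the 0xF0 passed in the log this prints "cores: 4 5 6 7".
 * Illustrative sketch of the convention, not SPDK's own parser. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;   /* value passed as -m in the log */
    printf("core mask 0x%lX selects cores:", mask);
    for (int core = 0; mask != 0; core++, mask >>= 1) {
        if (mask & 1)
            printf(" %d", core);
    }
    putchar('\n');
    return 0;
}
```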
00:47:01.172 [... the triple repeats at timestamps 17:00:20.657403 through 17:00:20.661017 ...]
00:47:01.173 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2968339
00:47:01.173 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:47:01.173 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2968339
00:47:01.173 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2968339 ']'
00:47:01.173 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:47:01.173 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:47:01.173 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:47:01.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:47:01.173 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:47:01.173 [... the connect()/qpair-failed triple continues throughout, at timestamps 17:00:20.661190 through 17:00:20.662110 ...]
00:47:01.173 17:00:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:47:01.173 [... the triple repeats at timestamps 17:00:20.662296 through 17:00:20.664203 ...]
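[Editor's note] The trace above starts a fresh nvmf_tgt (PID 2968339) inside the cvl_0_0_ns_spdk network namespace, then waitforlisten 2968339 blocks until that process accepts connections on its RPC socket /var/tmp/spdk.sock, bounded by the max_retries=100 set in the trace. A hedged C sketch of that kind of readiness poll (illustrative only; this is not the autotest shell helper itself, only the socket path and retry budget are taken from the log):

```c
/* Readiness poll in the spirit of waitforlisten: retry connecting to the
 * SPDK RPC UNIX-domain socket until the new target accepts, up to
 * max_retries attempts. Illustrative sketch, not the shell helper. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;          /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);    /* back off 100 ms between attempts */
    }
    return -1;                 /* process never started listening */
}

int main(void)
{
    /* Socket path and retry budget match the values traced in the log. */
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "Waiting for /var/tmp/spdk.sock timed out\n");
        return 1;
    }
    puts("process is listening on /var/tmp/spdk.sock");
    return 0;
}
```

Once this poll succeeds, the restarted target can bring its NVMe/TCP listener back up on 10.0.0.2:4420 and the host's reconnect attempts stop being refused.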
00:47:01.173 [... the "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" triple repeats continuously for tqpair=0x7f8780000b90 (addr=10.0.0.2, port=4420) at timestamps 17:00:20.664331 through 17:00:20.691772 ...]
00:47:01.176 [2024-07-22 17:00:20.691903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.691927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.692086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.692135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.692290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.692332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.692514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.692557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.692676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.692700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.692921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.692954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.693151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.693192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.693340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.693369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.693522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.693555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.693709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.693733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 
00:47:01.176 [2024-07-22 17:00:20.693893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.693916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.694100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.694135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.694311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.694351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.694525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.694571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.694725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.694748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.694881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.694904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.695079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.695109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.695272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.695299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.176 [2024-07-22 17:00:20.695500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.176 [2024-07-22 17:00:20.695540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.176 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.695710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.695734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 
00:47:01.177 [2024-07-22 17:00:20.695960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.696015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.696170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.696211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.696394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.696418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.696603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.696642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.696812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.696834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.697022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.697050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.697337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.697364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.697524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.697552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.697681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.697720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.697879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.697903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 
00:47:01.177 [2024-07-22 17:00:20.698209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.698236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.698391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.698431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.698575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.698599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.698729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.698752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.698915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.698939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.699100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.699124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.699343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.699366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.699564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.699604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.699779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.699802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.699937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.699980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 
00:47:01.177 [2024-07-22 17:00:20.700147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.700187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.700355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.700394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.700597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.700636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.700769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.700792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.700951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.700983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.701212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.701238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.701402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.701427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.701598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.701623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.701746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.701776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.701977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.702003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 
00:47:01.177 [2024-07-22 17:00:20.702143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.702167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.702300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.702337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.702542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.702565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.702741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.702780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.702922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.702945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.703162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.703188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.703324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.703363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.703559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.703590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.703771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.703795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.703952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.703996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 
00:47:01.177 [2024-07-22 17:00:20.704251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.704290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.704433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.704473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.704655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.704694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.704868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.704891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.705072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.705098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.177 [2024-07-22 17:00:20.705287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.177 [2024-07-22 17:00:20.705311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.177 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.705449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.705486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.705623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.705647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.705867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.705891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.706118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.706143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 
00:47:01.178 [2024-07-22 17:00:20.706277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.706301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.706465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.706488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.706710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.706748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.706867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.706891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.707077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.707103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.707255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.707280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.707534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.707558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.707746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.707770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.707991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.708016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.708186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.708210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 
00:47:01.178 [2024-07-22 17:00:20.708446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.708470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.708640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.708689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.708841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.708864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.709018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.709043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.709207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.709232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.709379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.709402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.709535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.709574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.709751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.709774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.709998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.710037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.710192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.710216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 
00:47:01.178 [2024-07-22 17:00:20.710408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.710431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.710611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.710635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.710873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.710897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.711042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.711066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.711198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.711222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.711444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.711477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.711656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.711679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.711831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.711855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.712082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.712114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 00:47:01.178 [2024-07-22 17:00:20.712244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.178 [2024-07-22 17:00:20.712280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.178 qpair failed and we were unable to recover it. 
[... the same error triplet keeps repeating through 17:00:20.714; interleaved with it, an SPDK nvmf application logs its startup: ...]
00:47:01.178 [2024-07-22 17:00:20.713984] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:47:01.178 [2024-07-22 17:00:20.714072] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
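Note on the EAL parameters above: -c 0xF0 is a hexadecimal core mask (binary 11110000, i.e. CPU cores 4-7); --file-prefix=spdk0 keeps this process's hugepage files separate from any other DPDK/SPDK process on the machine; --base-virtaddr pins the shared memory mappings at a fixed virtual address; --match-allocations makes EAL free hugepages back exactly as they were allocated; and --proc-type=auto lets EAL decide at startup whether to run as a primary or secondary process.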
[... the connect() failed, errno = 111 / qpair failed triplet continues unchanged from 17:00:20.714 through 17:00:20.725 ...]
00:47:01.180 [2024-07-22 17:00:20.725634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.725658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.725831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.725855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.726003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.726029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.726203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.726227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.726377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.726401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.726556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.726595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.726715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.726754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.726897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.726921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.727138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.727171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.727328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.727352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 
00:47:01.180 [2024-07-22 17:00:20.727545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.727569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.727731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.727755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.727923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.727948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.728136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.728161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.728346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.728370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.728521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.728545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.728690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.728728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.728874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.728912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.729058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.729084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.729279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.729305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 
00:47:01.180 [2024-07-22 17:00:20.729518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.729542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.729734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.729772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.729919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.729942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.730095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.180 [2024-07-22 17:00:20.730135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.180 qpair failed and we were unable to recover it. 00:47:01.180 [2024-07-22 17:00:20.730290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.730314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.730511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.730534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.730706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.730730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.730905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.730929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.731120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.731145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.731333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.731357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 
00:47:01.181 [2024-07-22 17:00:20.731497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.731520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.731667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.731707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.731867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.731891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.732039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.732079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.732296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.732320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.732457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.732481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.732646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.732684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.732850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.732880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.733076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.733105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.733240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.733264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 
00:47:01.181 [2024-07-22 17:00:20.733424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.733448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.733705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.733730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.733857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.733881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.734034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.734059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.734212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.734236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.734368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.734392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.734563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.734588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.734778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.734802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.734970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.734996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.735176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.735202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 
00:47:01.181 [2024-07-22 17:00:20.735376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.735400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.735573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.735596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.735759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.735783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.735908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.735948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.736095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.736134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.736276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.736300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.736461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.736500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.736667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.736713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.736881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.736904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.737081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.737106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 
00:47:01.181 [2024-07-22 17:00:20.737237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.737275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.737493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.737517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.737701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.737725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.181 qpair failed and we were unable to recover it. 00:47:01.181 [2024-07-22 17:00:20.737906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.181 [2024-07-22 17:00:20.737930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.738063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.738087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.738266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.738291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.738452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.738475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.738709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.738733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.738879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.738902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.739086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.739111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 
00:47:01.182 [2024-07-22 17:00:20.739263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.739301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.739434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.739473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.739623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.739648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.739809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.739847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.740002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.740027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.740151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.740177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.740313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.740337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.740516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.740541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.740662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.740692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.740830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.740854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 
00:47:01.182 [2024-07-22 17:00:20.741030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.741055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.741201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.741226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.741372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.741411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.741583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.741607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.741743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.741782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.741969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.741994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.742151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.742174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.742329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.742352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.742501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.742539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.742698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.742722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 
00:47:01.182 [2024-07-22 17:00:20.742907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.742930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.743080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.743104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.743282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.743306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.743448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.743486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.743660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.743684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.743851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.743875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.744019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.744045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.744201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.744225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.744403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.744428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.744599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.744625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 
00:47:01.182 [2024-07-22 17:00:20.744780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.744806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.744973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.745000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.745137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.745162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.182 [2024-07-22 17:00:20.745323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.182 [2024-07-22 17:00:20.745346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.182 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.745499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.745523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.745655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.745693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.745805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.745829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.745951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.746008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.746193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.746218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.746389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.746412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 
00:47:01.183 [2024-07-22 17:00:20.746532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.746556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.746707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.746732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.746868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.746907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.747051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.747078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.747216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.747240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.747422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.747446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.747665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.747697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.747828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.747852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.748004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.748033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.748215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.748240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 
00:47:01.183 [2024-07-22 17:00:20.748412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.748437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.748612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.748650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.748797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.748821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.748969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.748994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.749142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.749166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.749320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.749344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.749512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.749536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.749708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.749733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.749893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.749916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 00:47:01.183 [2024-07-22 17:00:20.750082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.183 [2024-07-22 17:00:20.750108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.183 qpair failed and we were unable to recover it. 
00:47:01.183 [2024-07-22 17:00:20.750256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.183 [2024-07-22 17:00:20.750282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.183 qpair failed and we were unable to recover it.
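The initiator retries the qpair connect in a tight loop, emitting this same three-line group for every attempt (only the microsecond timestamps change) until the run is abandoned. errno = 111 is ECONNREFUSED on Linux: each TCP SYN to 10.0.0.2:4420 (the conventional NVMe/TCP port, as printed in the log) is answered with a RST because nothing is accepting connections there, so SPDK reports "qpair failed and we were unable to recover it." A minimal sketch of the same failure with plain POSIX sockets -- illustrative only, not SPDK's posix_sock_create(); the address and port are copied from the log above:

/* sketch: reproduce errno 111 (ECONNREFUSED) with a plain connect()
 * to a host/port where no NVMe/TCP target is listening. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* with no listener on the target, errno is 111 (ECONNREFUSED) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}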
00:47:01.185 EAL: No free 2048 kB hugepages reported on node 1
00:47:01.185 [2024-07-22 17:00:20.762866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.185 [2024-07-22 17:00:20.762895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.185 qpair failed and we were unable to recover it.
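The interleaved EAL line is the one message in this stretch that is not the connect-retry triple: DPDK's environment abstraction layer found no free 2048 kB hugepages on NUMA node 1. On its own this can be benign (hugepages may still be reserved on node 0), but an SPDK process that cannot back its memory with hugepages will not come up, which would leave port 4420 refusing connections exactly as logged, so it is a plausible lead when triaging this failure. A minimal sketch of how a hugepage shortfall surfaces to a process -- an illustrative anonymous-mmap probe, not DPDK's actual hugetlbfs-based allocator:

/* sketch: request one anonymous 2 MB hugepage; if none are free
 * in the kernel pool (cf. the EAL message), mmap fails with ENOMEM. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 2 * 1024 * 1024;   /* one 2048 kB hugepage */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        printf("mmap(MAP_HUGETLB) failed, errno = %d (%s)\n",
               errno, strerror(errno));
        return 1;
    }
    munmap(p, len);
    puts("got one 2 MB hugepage");
    return 0;
}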
00:47:01.469 [2024-07-22 17:00:20.791135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.469 [2024-07-22 17:00:20.791161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.469 qpair failed and we were unable to recover it.
00:47:01.469 [2024-07-22 17:00:20.791306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.791347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.791571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.791596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.791754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.791778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.791926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.791986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.792141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.792167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.792395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.792423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.792618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.792643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.792819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.792844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.792991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.793018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.793206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.793232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 
00:47:01.469 [2024-07-22 17:00:20.793473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.793498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.793648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.793673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.793807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.793832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.794023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.794050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.469 [2024-07-22 17:00:20.794281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.469 [2024-07-22 17:00:20.794313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.469 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.794540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.794565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.794722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.794746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.794953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.794999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.795113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.795144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.795321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.795361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 
00:47:01.470 [2024-07-22 17:00:20.795511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.795535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.795769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.795794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.795980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.796007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.796187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.796213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.796404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.796428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.796601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.796626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.796863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.796887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.797014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.797041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.797148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.797174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.797340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.797379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 
00:47:01.470 [2024-07-22 17:00:20.797531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.797556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.797768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.797793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.797977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.798006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.798157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.798182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.798408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.798433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.798557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.798582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.798778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.798803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.798999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.799025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.799180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.799217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.799359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.799384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 
00:47:01.470 [2024-07-22 17:00:20.799575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.799599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.799748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.799774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.799975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.800000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.800143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.800169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.800320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.800345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.800532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.800557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.800740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.800778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.800920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.800944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.801136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.801162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.801304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.801328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 
00:47:01.470 [2024-07-22 17:00:20.801444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.801469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.801638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.801677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.470 qpair failed and we were unable to recover it. 00:47:01.470 [2024-07-22 17:00:20.801821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.470 [2024-07-22 17:00:20.801846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.802067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.802105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.802254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.802278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.802492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.802517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.802811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.802835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.803020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.803046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.803210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.803254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.803419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.803443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 
00:47:01.471 [2024-07-22 17:00:20.803663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.803687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.803869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.803905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.804023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.804049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.804222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.804260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.804416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.804441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.804675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.804702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.804864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.804888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.805114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.805140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.805302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.805341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.805530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.805553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 
00:47:01.471 [2024-07-22 17:00:20.805711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.805746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.805931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.805970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.806049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:01.471 [2024-07-22 17:00:20.806146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.806171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.806351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.806376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.806591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.806614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.806794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.806818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.807025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.807079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.807244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.807270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.807505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.807542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 
00:47:01.471 [2024-07-22 17:00:20.807707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.807731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.807895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.807919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.808075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.808100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.808264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.808290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.808460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.808484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.808658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.808697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.808845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.808869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.809060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.809086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.809248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.809287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.809442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.809467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 
00:47:01.471 [2024-07-22 17:00:20.809664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.471 [2024-07-22 17:00:20.809689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.471 qpair failed and we were unable to recover it. 00:47:01.471 [2024-07-22 17:00:20.809865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.809890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.810016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.810041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.810247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.810273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.810454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.810479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.810631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.810656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.810890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.810914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.811136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.811163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.811329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.811353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.811517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.811542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 
00:47:01.472 [2024-07-22 17:00:20.811773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.811803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.811999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.812026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.812197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.812223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.812328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.812352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.812540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.812565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.812689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.812714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.812919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.812944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.813125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.813150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.813342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.813366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.813515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.813540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 
00:47:01.472 [2024-07-22 17:00:20.813750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.813775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.813990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.814016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.814168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.814199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.814397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.814421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.814598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.814622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.814779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.814804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.814994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.815024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.815168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.815203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.815393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.815418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.815537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.815562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 
00:47:01.472 [2024-07-22 17:00:20.815707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.815732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.815923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.815947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.816117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.816144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.816273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.816299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.816436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.816475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.816646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.816670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.816843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.816867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.817100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.817127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.472 [2024-07-22 17:00:20.817272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.472 [2024-07-22 17:00:20.817296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.472 qpair failed and we were unable to recover it. 00:47:01.473 [2024-07-22 17:00:20.817453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.473 [2024-07-22 17:00:20.817477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.473 qpair failed and we were unable to recover it. 
00:47:01.473 [2024-07-22 17:00:20.817707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.473 [2024-07-22 17:00:20.817731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.473 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() refused with errno = 111 in posix_sock_create, sock connection error on tqpair=0x7f8780000b90 for addr=10.0.0.2, port=4420 in nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it.") repeats verbatim for every further reconnect attempt in this burst, per-message timestamps [2024-07-22 17:00:20.817916] through [2024-07-22 17:00:20.860650], console timestamps 00:47:01.473 through 00:47:01.479 ...]
00:47:01.479 [2024-07-22 17:00:20.860830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.860854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.861008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.861034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.861232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.861273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.861457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.861482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.861618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.861657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.861806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.861845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.862023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.862047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.862250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.862288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.862445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.862469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.862619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.862659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 
00:47:01.479 [2024-07-22 17:00:20.862901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.862925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.863088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.863114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.863284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.863309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.863495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.863520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.863678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.863717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.863925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.863969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.864141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.864166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.864339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.864364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.864526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.864551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.864710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.864735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 
00:47:01.479 [2024-07-22 17:00:20.864909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.864942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.865105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.865138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.865261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.865302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.865487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.865511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.865672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.865696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.865875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.865899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.866055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.866096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.866261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.866286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.866474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.866498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.866686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.866709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 
00:47:01.479 [2024-07-22 17:00:20.866863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.866887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.867004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.867030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.867195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.867220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.867463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.867488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.867631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.867655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.867812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.867851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.868013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.868038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.868179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.868203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.868403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.868428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.868580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.868624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 
00:47:01.479 [2024-07-22 17:00:20.868816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.868851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.479 [2024-07-22 17:00:20.869013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.479 [2024-07-22 17:00:20.869039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.479 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.869176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.869216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.869459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.869484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.869622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.869647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.869847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.869871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.870044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.870069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.870232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.870256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.870465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.870490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.870688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.870712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 
00:47:01.480 [2024-07-22 17:00:20.870853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.870877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.871039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.871065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.871214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.871240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.871427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.871452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.871611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.871650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.871827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.871857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.872042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.872083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.872229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.872254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.872412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.872436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.872611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.872635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 
00:47:01.480 [2024-07-22 17:00:20.872793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.872817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.872983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.873008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.873234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.873259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.873418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.873442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.873683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.873707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.873857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.873882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.874038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.874078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.874307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.874332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.874488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.874513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.874678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.874702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 
00:47:01.480 [2024-07-22 17:00:20.874899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.874923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.875093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.875131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.875276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.875302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.875462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.875486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.875641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.875667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.875853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.875878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.876032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.876059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.876208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.876248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.876424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.876457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.876646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.876675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 
00:47:01.480 [2024-07-22 17:00:20.876823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.876847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.877063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.877089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.877258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.877281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.877450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.877474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.877684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.480 [2024-07-22 17:00:20.877709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.480 qpair failed and we were unable to recover it. 00:47:01.480 [2024-07-22 17:00:20.877908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.877932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.878107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.878133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.878254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.878292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.878412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.878453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.878616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.878641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 
00:47:01.481 [2024-07-22 17:00:20.878838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.878867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.879024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.879049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.879277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.879302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.879443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.879479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.879623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.879660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.879835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.879869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.880034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.880059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.880212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.880253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.880369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.880408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.880577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.880602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 
00:47:01.481 [2024-07-22 17:00:20.880756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.880795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.880995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.881027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.881167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.881192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.881354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.881378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.881542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.881566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.881755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.881780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.881936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.881962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.882147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.882173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.882366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.882390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.882564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.882589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 
00:47:01.481 [2024-07-22 17:00:20.882810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.882834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.883024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.883051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.883171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.883198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.883329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.883354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.883566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.883592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.883758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.883783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.883948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.883980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.884196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.884221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.884385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.884409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.884595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.884623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 
00:47:01.481 [2024-07-22 17:00:20.884813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.884838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.885015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.885058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.885236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.885262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.885398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.885438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.885639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.885665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.885834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.885859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.886040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.886066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.886229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.886269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.886425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.481 [2024-07-22 17:00:20.886450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.481 qpair failed and we were unable to recover it. 00:47:01.481 [2024-07-22 17:00:20.886607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.886648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 
00:47:01.482 [2024-07-22 17:00:20.886836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.886861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.887001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.887038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.887213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.887239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.887393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.887418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.887617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.887642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.887813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.887838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.887969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.887996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.888176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.888212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.888347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.888372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.888495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.888520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 
00:47:01.482 [2024-07-22 17:00:20.888650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.888676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.888838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.888863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.889042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.889080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.889223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.889263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.889448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.889473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.889669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.889694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.889872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.889898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.890100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.890126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.890269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.890308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 00:47:01.482 [2024-07-22 17:00:20.890425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.482 [2024-07-22 17:00:20.890465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.482 qpair failed and we were unable to recover it. 
00:47:01.482 [2024-07-22 17:00:20.890580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.890606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.890770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.890796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.890982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.891008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.891202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.891226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.891354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.891379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.891503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.891529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.891644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.891670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.891842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.891882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.892103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.892129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.892308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.892338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.892509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.892546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.892708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.892732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.892913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.892938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.893097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.893124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.893284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.893310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.893443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.482 [2024-07-22 17:00:20.893482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.482 qpair failed and we were unable to recover it.
00:47:01.482 [2024-07-22 17:00:20.893660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.893700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.893859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.893884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.894069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.894095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.894235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.894287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.894467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.894503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.894699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.894723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.894900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.894925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.895137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.895193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.895391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.895418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.895538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.895564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.895777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.895802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.896026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.896053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.896191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.896217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.896474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.896499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.896647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.896672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.896872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.896897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.897066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.897106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.897278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.897305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.897515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.897541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.897699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.897724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.897917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.897957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.898104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.898132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.898331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.898356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.898509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.898534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.898752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.898778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.898904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.898930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.899098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.899138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.899282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.899308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.899545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.899571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.899788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.899814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.899976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.900003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.900189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.900216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.900387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.900412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.900593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.900628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.900776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.900802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.900971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.901025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.901202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.901229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.901418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.901444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.901608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.901633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.901813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.901838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.902000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.902027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.902164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.902191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.902359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.902384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.483 [2024-07-22 17:00:20.902533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.483 [2024-07-22 17:00:20.902557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.483 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.902750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.902775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.902915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.902941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.903094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.903135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.903316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.903355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.903540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.903568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.903716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.903742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.903923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.903949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.904168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.904195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.904332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.904357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.904608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.904635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.904787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.904814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.905007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.905049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.905184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.905223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.905415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.905441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.905601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.905626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.905778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.905804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.905954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.905991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.906147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.906173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.906330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.906355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.906509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.906534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.906792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.906816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.906993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.907019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.907156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.907181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.907298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.907338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.907479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.907504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.907639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.907664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.907841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.907866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.908031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.908072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.908221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.908249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.908482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.908514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.908663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.908687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.908835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.908860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.909017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.909044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.909158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.909184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.909337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.909366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.909530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.909555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.909691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.909717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.909896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.909922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.910096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.910122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.910267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.910291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.910495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.910520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.910640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.910666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.910778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.910804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.910971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.484 [2024-07-22 17:00:20.911019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.484 qpair failed and we were unable to recover it.
00:47:01.484 [2024-07-22 17:00:20.911188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.911214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.911346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.911372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.911505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.911530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.911731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.911772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.911912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.911939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.912059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.912085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.912222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.912249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.912342] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:47:01.485 [2024-07-22 17:00:20.912376] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:47:01.485 [2024-07-22 17:00:20.912391] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:47:01.485 [2024-07-22 17:00:20.912403] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:47:01.485 [2024-07-22 17:00:20.912413] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:47:01.485 [2024-07-22 17:00:20.912410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.912434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.912487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:47:01.485 [2024-07-22 17:00:20.912538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:47:01.485 [2024-07-22 17:00:20.912621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.912650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.912585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:47:01.485 [2024-07-22 17:00:20.912587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:47:01.485 [2024-07-22 17:00:20.912804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.912833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.913024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.913051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.913163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.913188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.913326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.913352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.913511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.913536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.913704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.913730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.913897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.913925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.914109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.914136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.914347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.914373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.914520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.914546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.914748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.914773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.914929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.914955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.915136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.915162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.915302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.915328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.915494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.915520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.915680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.915707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.915934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.915961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.916110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.916136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.916326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.916352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.916551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.916577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.916715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.916740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.916884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.916910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.917073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.917100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.917211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.917237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.917414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.917440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.917577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.917603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.917733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.917759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.917933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.917972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.918113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.918138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.918291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.918318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.918551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.918578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.918736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.485 [2024-07-22 17:00:20.918763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.485 qpair failed and we were unable to recover it.
00:47:01.485 [2024-07-22 17:00:20.918984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.919011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.919143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.919168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.919305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.919332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.919460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.919493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.919628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.919653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.919810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.919845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.920075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.920102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.920241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.920267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.920477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.920503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.920679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.920705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.920814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.920841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.921003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.921030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.921190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.921216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.921363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.921389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.921523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.921549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.921723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.921748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.921889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.921916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.922144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.922187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.922431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.922459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.922640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.922667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.922777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.922804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.923024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.486 [2024-07-22 17:00:20.923052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420
00:47:01.486 qpair failed and we were unable to recover it.
00:47:01.486 [2024-07-22 17:00:20.923184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.923225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.923389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.923423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.923595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.923622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.923739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.923764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.923934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.923960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.924104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.924130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.924289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.924315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.924461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.924487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.924653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.924680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.924839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.924864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 
00:47:01.486 [2024-07-22 17:00:20.924998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.925025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.925169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.925195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.925414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.925440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.925572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.925602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.925864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.486 [2024-07-22 17:00:20.925889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.486 qpair failed and we were unable to recover it. 00:47:01.486 [2024-07-22 17:00:20.926065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.926091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.926293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.926318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.926453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.926479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.926692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.926719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.926868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.926894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 
00:47:01.487 [2024-07-22 17:00:20.927018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.927043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.927217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.927244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.927375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.927401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.927535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.927561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.927777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.927803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.927974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.928001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.928204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.928231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.928403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.928430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.928592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.928618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.928782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.928808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 
00:47:01.487 [2024-07-22 17:00:20.928950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.928983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.929152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.929178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.929379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.929405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.929562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.929588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.929792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.929819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.929952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.929985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.930123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.930148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.930354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.930391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.930562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.930588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.930797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.930823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 
00:47:01.487 [2024-07-22 17:00:20.930972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.930999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.931130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.931156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.931286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.931312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.931434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.931460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.931590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.931627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.931770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.931797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.932001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.932028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.932214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.932240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.932378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.932404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.932568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.932594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 
00:47:01.487 [2024-07-22 17:00:20.932729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.932755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.932895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.932922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.933146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.933173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.933283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.933313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.933500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.933527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.933711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.933738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.933940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.933971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.934137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.934163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.934298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.487 [2024-07-22 17:00:20.934323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.487 qpair failed and we were unable to recover it. 00:47:01.487 [2024-07-22 17:00:20.934468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.934494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 
00:47:01.488 [2024-07-22 17:00:20.934671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.934698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.934863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.934889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.935008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.935035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.935207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.935233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.935455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.935481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.935614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.935640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.935774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.935801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.936026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.936053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.936168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.936194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.936334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.936360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 
00:47:01.488 [2024-07-22 17:00:20.936588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.936615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.936752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.936778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.936991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.937017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.937159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.937185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.937316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.937342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.937526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.937562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.937736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.937774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.937915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.937942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.938138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.938164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.938358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.938384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 
00:47:01.488 [2024-07-22 17:00:20.938526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.938553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.938682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.938708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.938868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.938895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.939078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.939104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.939297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.939323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.939464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.939490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.939650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.939676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.939907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.939933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.940073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.940100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.940261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.940287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 
00:47:01.488 [2024-07-22 17:00:20.940482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.940508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.940675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.940701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.940831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.940857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.941021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.941053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.941193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.941219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.941355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.941381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.941541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.941568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.941682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.941709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.941858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.941884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.942052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.942097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 
00:47:01.488 [2024-07-22 17:00:20.942270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.942311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.942483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.942510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.942652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.488 [2024-07-22 17:00:20.942678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.488 qpair failed and we were unable to recover it. 00:47:01.488 [2024-07-22 17:00:20.942830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.942857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.943024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.943052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.943191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.943219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.943363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.943390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.943531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.943558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.943697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.943722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.943862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.943888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 
00:47:01.489 [2024-07-22 17:00:20.944025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.944052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.944194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.944220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.944381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.944408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.944544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.944571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.944683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.944710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.944850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.944876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.945012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.945039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.945210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.945236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.945398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.945424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.945558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.945585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 
00:47:01.489 [2024-07-22 17:00:20.945748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.945779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.945969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.946009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.946159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.946187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.946353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.946380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.946517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.946543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.946708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.946734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8788000b90 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.946843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.946870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.947039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.947066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.947175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.947201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.947314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.947340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 
00:47:01.489 [2024-07-22 17:00:20.947503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.947529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.947662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.947688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.947800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.947826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.947988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.948015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.948188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.948215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.948347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.948373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.948537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.948563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.948666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.948692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.948858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.948884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.949032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.949059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 
00:47:01.489 [2024-07-22 17:00:20.949198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.949225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.949362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.949388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.949514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.949540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.949673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.949699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.949840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.949866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.950008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.950036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.950173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.489 [2024-07-22 17:00:20.950198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.489 qpair failed and we were unable to recover it. 00:47:01.489 [2024-07-22 17:00:20.950335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.490 [2024-07-22 17:00:20.950366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.490 qpair failed and we were unable to recover it. 00:47:01.490 [2024-07-22 17:00:20.950502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.490 [2024-07-22 17:00:20.950528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.490 qpair failed and we were unable to recover it. 00:47:01.490 [2024-07-22 17:00:20.950692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.490 [2024-07-22 17:00:20.950718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.490 qpair failed and we were unable to recover it. 
00:47:01.490 [2024-07-22 17:00:20.950851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.490 [2024-07-22 17:00:20.950877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.490 qpair failed and we were unable to recover it.
[... the same three-message triplet -- posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / qpair failed and we were unable to recover it. -- repeats continuously from 17:00:20.950851 through 17:00:20.988045 (console timestamps 00:47:01.490-00:47:01.496), alternating between tqpair=0x140c570 and tqpair=0x7f8780000b90, always against addr=10.0.0.2, port=4420 ...]
00:47:01.496 [2024-07-22 17:00:20.988203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.988228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.988360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.988386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.988523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.988548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.988686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.988712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.988877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.988902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.989011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.989038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.989212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.989238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.989376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.989401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.989505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.989531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.989649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.989674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 
00:47:01.496 [2024-07-22 17:00:20.989816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.989842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.989986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.990012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.990148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.990174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.990310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.990335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.990498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.990524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.990659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.990685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.990814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.990840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.990978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.991004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.991161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.991187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.991350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.991376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 
00:47:01.496 [2024-07-22 17:00:20.991511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.991536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.991652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.991677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.991847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.991872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.992044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.992070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.992177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.992203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.992427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.992454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.992600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.496 [2024-07-22 17:00:20.992626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.496 qpair failed and we were unable to recover it. 00:47:01.496 [2024-07-22 17:00:20.992828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.992864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.992975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.993000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.993151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.993176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 
00:47:01.497 [2024-07-22 17:00:20.993319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.993345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.993539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.993565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.993728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.993761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.993884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.993910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.994045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.994071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.994210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.994235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.994440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.994466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.994624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.994650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.994754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.994779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.994927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.994953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 
00:47:01.497 [2024-07-22 17:00:20.995098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.995124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.995279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.995305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.995436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.995461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.995697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.995723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.995893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.995919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.996091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.996118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.996282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.996307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.996444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.996469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.996582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.996608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.996744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.996770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 
00:47:01.497 [2024-07-22 17:00:20.996970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.996996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.997184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.997210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.997364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.997401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.997556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.997582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.997761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.997787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.997948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.997981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.998115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.998141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.998310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.998335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.998468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.998501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.998656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.998682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 
00:47:01.497 [2024-07-22 17:00:20.998845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.998871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.999100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.999126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.999265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.999291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.999413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.999449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.999645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.999670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.497 qpair failed and we were unable to recover it. 00:47:01.497 [2024-07-22 17:00:20.999829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.497 [2024-07-22 17:00:20.999853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.000046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.000072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.000173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.000199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.000352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.000377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.000558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.000583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 
00:47:01.498 [2024-07-22 17:00:21.000732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.000758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.000930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.000956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.001139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.001164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.001298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.001339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.001531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.001557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.001740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.001765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.001950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.001996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.002124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.002150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.002287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.002313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.002483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.002508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 
00:47:01.498 [2024-07-22 17:00:21.002667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.002693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.002854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.002879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.003066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.003092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.003264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.003289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.003516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.003551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.003710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.003738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.003904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.003929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.004132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.004159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.004330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.004356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.004518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.004544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 
00:47:01.498 [2024-07-22 17:00:21.004775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.004800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.004928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.004953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.005141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.005168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.005300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.005327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.005435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.005461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.005669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.005694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.005834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.005860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.005997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.006024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.006169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.006194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.006371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.006397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 
00:47:01.498 [2024-07-22 17:00:21.006534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.006559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.006719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.006745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.006978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.007004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.498 qpair failed and we were unable to recover it. 00:47:01.498 [2024-07-22 17:00:21.007154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.498 [2024-07-22 17:00:21.007180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.007317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.007343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.007495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.007521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.007679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.007705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.007864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.007890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.008117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.008153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.008320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.008346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 
00:47:01.499 [2024-07-22 17:00:21.008481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.008507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.008735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.008760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.008909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.008939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.009132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.009159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.009310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.009336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.009472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.009498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.009631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.009657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.009817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.009843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.009978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.010004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.010142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.010167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 
00:47:01.499 [2024-07-22 17:00:21.010339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.010365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.010527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.010553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.010729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.010754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.010891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.010917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.011122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.011149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.011285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.011310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.011472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.011498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.011703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.011730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.011862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.011888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 00:47:01.499 [2024-07-22 17:00:21.012026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.499 [2024-07-22 17:00:21.012055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.499 qpair failed and we were unable to recover it. 
00:47:01.499 [2024-07-22 17:00:21.012273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.499 [2024-07-22 17:00:21.012299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.499 qpair failed and we were unable to recover it.
[... the same three-record sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back to back from 17:00:21.012465 through 17:00:21.032599 ...]
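For context (editor's note, not part of the captured output): errno 111 on Linux is ECONNREFUSED, i.e. the initiator's TCP SYN to 10.0.0.2:4420 is being answered with RST because nothing is listening while the target side of the disconnect test is down. A minimal standalone C sketch, not SPDK code, that reproduces the same failure mode posix_sock_create() is reporting; the address and port are taken from the log above, everything else is illustrative:

/* Sketch only: reproduce connect() failed, errno = 111 (ECONNREFUSED)
 * against a host/port with no listener, as in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener this prints: connect: errno=111 (Connection refused) */
        printf("connect: errno=%d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}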
00:47:01.502 [2024-07-22 17:00:21.032807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.502 [2024-07-22 17:00:21.032833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.502 qpair failed and we were unable to recover it.
[... four more identical connect() failed / qpair failed sequences (17:00:21.032970 through 17:00:21.033522) ...]
00:47:01.502 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:47:01.502 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:47:01.502 [2024-07-22 17:00:21.033719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.502 [2024-07-22 17:00:21.033746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.502 qpair failed and we were unable to recover it.
00:47:01.502 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:47:01.502 [2024-07-22 17:00:21.033907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.502 [2024-07-22 17:00:21.033934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.502 qpair failed and we were unable to recover it.
00:47:01.502 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:47:01.502 [2024-07-22 17:00:21.034074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.503 [2024-07-22 17:00:21.034102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.503 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:47:01.503 qpair failed and we were unable to recover it.
00:47:01.503 [2024-07-22 17:00:21.034237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.503 [2024-07-22 17:00:21.034264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.503 qpair failed and we were unable to recover it.
[... the same three-record sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back to back from 17:00:21.034369 through 17:00:21.048487 ...]
00:47:01.505 [2024-07-22 17:00:21.048647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.048673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.048811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.048837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.048974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.049000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.049140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.049165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.049276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.049303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.049437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.049463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.049600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.049626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.049765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.049792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.049928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.049954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.050072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.050098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 
00:47:01.505 [2024-07-22 17:00:21.050218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.050244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.050376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.050401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.050511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.050537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.050678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.050705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.050842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.050868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.051010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.051037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.051152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.051177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.505 [2024-07-22 17:00:21.051293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.505 [2024-07-22 17:00:21.051318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.505 qpair failed and we were unable to recover it. 00:47:01.506 [2024-07-22 17:00:21.051427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.506 [2024-07-22 17:00:21.051453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.506 qpair failed and we were unable to recover it. 
00:47:01.506 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:47:01.506 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:47:01.506 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:47:01.506 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:47:01.506 [... 8 more connect()/qpair-failure triplets for tqpair=0x140c570, timestamps 17:00:21.051621 through 17:00:21.052835, were interleaved with the shell trace above ...]
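For context on the traced step above: rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py, and the trap line registers cleanup (dump the app's shared memory, then nvmftestfini) on exit. The bdev_malloc_create call asks the target for a 64 MiB RAM-backed block device with 512-byte blocks; the RPC prints the new bdev's name on success, which is why a bare "Malloc0" line appears further down in this log. A minimal standalone sketch of the same step, assuming an SPDK checkout with nvmf_tgt already running (the paths are assumptions, not taken from this log):

  # create a 64 MiB malloc bdev with 512-byte blocks, named Malloc0;
  # on success the RPC echoes the bdev name ("Malloc0") to stdout
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0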
00:47:01.506 [... 60 more connect()/qpair-failure triplets for tqpair=0x140c570, timestamps 17:00:21.052956 through 17:00:21.062687 ...]
00:47:01.507 [... 60 more connect()/qpair-failure triplets, timestamps 17:00:21.062828 through 17:00:21.072541; from 17:00:21.064284 onward the failures alternate between tqpair=0x140c570 and a second qpair, tqpair=0x7f8778000b90, both at addr=10.0.0.2, port=4420 ...]
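errno = 111 is ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420, which is the expected state while this disconnect test has the target side down. The host driver keeps retrying and logs one failure triplet per connect() attempt, on the original qpair (tqpair=0x140c570) and, from 17:00:21.064284 onward, on a second qpair (tqpair=0x7f8778000b90) as well. A quick manual check of the same condition, assuming shell access to the hosts (the tool choice is an assumption, not part of this harness):

  # from the initiator: a refused TCP probe reproduces errno 111 (ECONNREFUSED)
  nc -z -w1 10.0.0.2 4420 && echo listening || echo refused
  # on the target host: confirm whether anything listens on the NVMe/TCP port
  ss -ltn 'sport = :4420'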
00:47:01.509 [2024-07-22 17:00:21.072700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.072726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 [2024-07-22 17:00:21.072867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.072892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 [2024-07-22 17:00:21.073037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.073064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 [2024-07-22 17:00:21.073181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.073209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 [2024-07-22 17:00:21.073400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.073426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 [2024-07-22 17:00:21.073548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.073574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 Malloc0 00:47:01.509 [2024-07-22 17:00:21.073750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.073784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 [2024-07-22 17:00:21.073933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.073960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:01.509 [2024-07-22 17:00:21.074110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.074136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 
00:47:01.509 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:47:01.509 [2024-07-22 17:00:21.074251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.074277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:01.509 [2024-07-22 17:00:21.074403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.074430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:47:01.509 [2024-07-22 17:00:21.074564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.074590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 [2024-07-22 17:00:21.074770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.074797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 [2024-07-22 17:00:21.074932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.074958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.509 [2024-07-22 17:00:21.075102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.509 [2024-07-22 17:00:21.075129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.509 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.075274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.075299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.075422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.075447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.075569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.075594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 
00:47:01.510 [2024-07-22 17:00:21.075754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.075779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.075958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.076001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.076147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.076173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.076332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.076358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.076498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.076524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.076632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.076657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.076820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.076848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.077037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.077064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.077175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.077202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.077393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.077399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:01.510 [2024-07-22 17:00:21.077427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 
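The xtrace lines interleaved above show the target side being stood up while the initiator retries: rpc_cmd nvmf_create_transport -t tcp -o creates the TCP transport, and the tcp.c *** TCP Transport Init *** notice confirms it took effect. In SPDK's test harness rpc_cmd forwards its arguments to scripts/rpc.py, so a standalone equivalent would be roughly the following sketch, assuming a running nvmf_tgt on the default RPC socket:
# -t selects the transport type; -o is rpc.py's --c2h-success toggle,
# meaningful only for TCP (flag semantics per SPDK's rpc.py; verify
# against the checked-out revision).
./scripts/rpc.py nvmf_create_transport -t tcp -o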
00:47:01.510 [2024-07-22 17:00:21.077591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.077618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.077778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.077804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.077991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.078019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.078164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.078190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.078321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.078346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.078458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.078484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.078646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.078672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.078836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.078861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.078989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.079015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.079173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.079199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 
00:47:01.510 [2024-07-22 17:00:21.079336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.079362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.079486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.079512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.079619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.079644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.079796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.079825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.079936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.079961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.080082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.080107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.080221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.080247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.080452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.080478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.080627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.080653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.080798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.080824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 
00:47:01.510 [2024-07-22 17:00:21.080984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.081011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.081120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.081147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.081263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.081289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.081454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.081479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.081672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.081697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.081883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.081909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.082077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.082103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.510 [2024-07-22 17:00:21.082215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.510 [2024-07-22 17:00:21.082241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.510 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.082375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.082401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.082536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.082562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 
00:47:01.511 [2024-07-22 17:00:21.082697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.082723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.082832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.082858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.083003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.083028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.083170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.083196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.083330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.083355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.083534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.083560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.083704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.083730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.083897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.083922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.084073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.084099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.084235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.084260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 
00:47:01.511 [2024-07-22 17:00:21.084456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.084482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.084625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.084651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.084809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.084835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.084991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.085018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.085151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.085177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.085306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.085332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.085573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.085599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:01.511 [2024-07-22 17:00:21.085736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.085761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:01.511 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:01.511 [2024-07-22 17:00:21.085970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.085996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 
00:47:01.511 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:47:01.511 [2024-07-22 17:00:21.086141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.086166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.086280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.086306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.086470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.086496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.086657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.086686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.086827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.086853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.087017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.087044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.087150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.087176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.087336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.087362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.087555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.087581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.087721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.087747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 
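The trace above also shows rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 creating the subsystem the initiator will eventually reach. A standalone sketch of the same step (flags per SPDK's rpc.py: -a allows any host NQN to connect, -s sets the serial number):
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001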
00:47:01.511 [2024-07-22 17:00:21.087888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.087914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.088045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.088071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.088213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.088239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.088362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.088388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.088596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.088621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.088780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.088805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.088977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.089004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.089147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.511 [2024-07-22 17:00:21.089172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.511 qpair failed and we were unable to recover it. 00:47:01.511 [2024-07-22 17:00:21.089334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.089360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.089497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.089523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 
00:47:01.512 [2024-07-22 17:00:21.089665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.089691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.089818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.089844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.089956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.089987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.090119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.090144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.090280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.090306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.090501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.090526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.090659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.090684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.090818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.090844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.091000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.091027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.091145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.091171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 
00:47:01.512 [2024-07-22 17:00:21.091281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.091310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.091447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.091472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.091575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.091600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.091776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.091802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.091949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.091981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.092127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.092153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.092310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.092336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.092473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.092499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.092613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.092639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.092773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.092799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 
00:47:01.512 [2024-07-22 17:00:21.092952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.093029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.093160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.093189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.093331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.093358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.093500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.093527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:01.512 [2024-07-22 17:00:21.093666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.093693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:47:01.512 [2024-07-22 17:00:21.093813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.093839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8778000b90 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:01.512 [2024-07-22 17:00:21.094008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:47:01.512 [2024-07-22 17:00:21.094037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.094185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.094211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 
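The rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 trace above attaches the Malloc0 bdev (whose name was echoed into the log earlier) to the subsystem as a namespace. A sketch of the pair of calls that produces this state; the 64 MiB size and 512-byte block size are illustrative values, not taken from this run:
# Create the backing bdev, then expose it as a namespace of cnode1.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0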
00:47:01.512 [2024-07-22 17:00:21.094362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.094398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.094588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.094619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.094743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.094769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.094909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.094934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.095076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.095103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.095239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.095265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.095403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.095429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.095567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.095592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.095744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.095770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.512 [2024-07-22 17:00:21.095933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.095960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 
00:47:01.512 [2024-07-22 17:00:21.096109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.512 [2024-07-22 17:00:21.096135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.512 qpair failed and we were unable to recover it. 00:47:01.772 [2024-07-22 17:00:21.096325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.772 [2024-07-22 17:00:21.096373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.772 qpair failed and we were unable to recover it. 00:47:01.772 [2024-07-22 17:00:21.096544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.772 [2024-07-22 17:00:21.096587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.772 qpair failed and we were unable to recover it. 00:47:01.772 [2024-07-22 17:00:21.096735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.772 [2024-07-22 17:00:21.096784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.772 qpair failed and we were unable to recover it. 00:47:01.772 [2024-07-22 17:00:21.097005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.772 [2024-07-22 17:00:21.097057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.772 qpair failed and we were unable to recover it. 00:47:01.772 [2024-07-22 17:00:21.097223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.772 [2024-07-22 17:00:21.097252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.772 qpair failed and we were unable to recover it. 00:47:01.772 [2024-07-22 17:00:21.097403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.772 [2024-07-22 17:00:21.097433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.772 qpair failed and we were unable to recover it. 00:47:01.772 [2024-07-22 17:00:21.097594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.772 [2024-07-22 17:00:21.097626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.772 qpair failed and we were unable to recover it. 00:47:01.772 [2024-07-22 17:00:21.097801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.772 [2024-07-22 17:00:21.097839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.772 qpair failed and we were unable to recover it. 00:47:01.772 [2024-07-22 17:00:21.097980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.772 [2024-07-22 17:00:21.098008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.772 qpair failed and we were unable to recover it. 
00:47:01.772 [2024-07-22 17:00:21.098145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.772 [2024-07-22 17:00:21.098176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.772 qpair failed and we were unable to recover it. 00:47:01.772 [2024-07-22 17:00:21.098350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.098398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.098512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.098540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.098682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.098708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.098876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.098902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.099071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.099098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.099266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.099292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.099426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.099452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.099560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.099587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.099701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.099727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 
00:47:01.773 [2024-07-22 17:00:21.099895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.099927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.100076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.100103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.100246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.100271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.100420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.100447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.100593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.100619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.100790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.100817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.100983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.101011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.101183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.101210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.101370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.101396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 00:47:01.773 [2024-07-22 17:00:21.101508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:47:01.773 [2024-07-22 17:00:21.101534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420 00:47:01.773 qpair failed and we were unable to recover it. 
00:47:01.773 [2024-07-22 17:00:21.101669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:47:01.773 [2024-07-22 17:00:21.101695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:47:01.773 [2024-07-22 17:00:21.101854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 [2024-07-22 17:00:21.101881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:47:01.773 [2024-07-22 17:00:21.102053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:47:01.773 [2024-07-22 17:00:21.102080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 [2024-07-22 17:00:21.102226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 [2024-07-22 17:00:21.102252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 [2024-07-22 17:00:21.102421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 [2024-07-22 17:00:21.102448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 [2024-07-22 17:00:21.102585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 [2024-07-22 17:00:21.102612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 [2024-07-22 17:00:21.102756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 [2024-07-22 17:00:21.102788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 [2024-07-22 17:00:21.102920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 [2024-07-22 17:00:21.102947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 [2024-07-22 17:00:21.103197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 [2024-07-22 17:00:21.103237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 [2024-07-22 17:00:21.103353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 [2024-07-22 17:00:21.103380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 [2024-07-22 17:00:21.103517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 [2024-07-22 17:00:21.103543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 [2024-07-22 17:00:21.103714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 [2024-07-22 17:00:21.103740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 [2024-07-22 17:00:21.103880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.773 [2024-07-22 17:00:21.103906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.773 qpair failed and we were unable to recover it.
00:47:01.773 [2024-07-22 17:00:21.104080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.774 [2024-07-22 17:00:21.104106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c570 with addr=10.0.0.2, port=4420
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.104222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.774 [2024-07-22 17:00:21.104250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.104364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.774 [2024-07-22 17:00:21.104391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.104583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.774 [2024-07-22 17:00:21.104609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.104743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.774 [2024-07-22 17:00:21.104769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.104907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.774 [2024-07-22 17:00:21.104933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.105073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.774 [2024-07-22 17:00:21.105099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.105242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.774 [2024-07-22 17:00:21.105269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.105476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:47:01.774 [2024-07-22 17:00:21.105502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8780000b90 with addr=10.0.0.2, port=4420
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.105633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:47:01.774 [2024-07-22 17:00:21.108106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.774 [2024-07-22 17:00:21.108272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.774 [2024-07-22 17:00:21.108299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.774 [2024-07-22 17:00:21.108315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.774 [2024-07-22 17:00:21.108330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.774 [2024-07-22 17:00:21.108365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:47:01.774 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:47:01.774 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:47:01.774 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:47:01.774 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:47:01.774 17:00:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2967812
00:47:01.774 [2024-07-22 17:00:21.118006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.774 [2024-07-22 17:00:21.118127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.774 [2024-07-22 17:00:21.118155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.774 [2024-07-22 17:00:21.118170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.774 [2024-07-22 17:00:21.118184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.774 [2024-07-22 17:00:21.118216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.128063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.774 [2024-07-22 17:00:21.128178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.774 [2024-07-22 17:00:21.128205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.774 [2024-07-22 17:00:21.128221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.774 [2024-07-22 17:00:21.128235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.774 [2024-07-22 17:00:21.128287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.138048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.774 [2024-07-22 17:00:21.138165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.774 [2024-07-22 17:00:21.138192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.774 [2024-07-22 17:00:21.138207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.774 [2024-07-22 17:00:21.138222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.774 [2024-07-22 17:00:21.138253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.148019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.774 [2024-07-22 17:00:21.148141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.774 [2024-07-22 17:00:21.148168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.774 [2024-07-22 17:00:21.148183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.774 [2024-07-22 17:00:21.148198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.774 [2024-07-22 17:00:21.148229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.158076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.774 [2024-07-22 17:00:21.158186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.774 [2024-07-22 17:00:21.158213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.774 [2024-07-22 17:00:21.158229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.774 [2024-07-22 17:00:21.158243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.774 [2024-07-22 17:00:21.158290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.168061] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.774 [2024-07-22 17:00:21.168169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.774 [2024-07-22 17:00:21.168197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.774 [2024-07-22 17:00:21.168212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.774 [2024-07-22 17:00:21.168226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.774 [2024-07-22 17:00:21.168258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.178076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.774 [2024-07-22 17:00:21.178218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.774 [2024-07-22 17:00:21.178259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.774 [2024-07-22 17:00:21.178275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.774 [2024-07-22 17:00:21.178288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.774 [2024-07-22 17:00:21.178318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.188077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.774 [2024-07-22 17:00:21.188194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.774 [2024-07-22 17:00:21.188221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.774 [2024-07-22 17:00:21.188237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.774 [2024-07-22 17:00:21.188251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.774 [2024-07-22 17:00:21.188308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.774 qpair failed and we were unable to recover it.
00:47:01.774 [2024-07-22 17:00:21.198083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.198200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.198226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.198257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.198272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.198303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.208231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.208376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.208402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.208417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.208429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.208470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.218264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.218393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.218418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.218433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.218453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.218484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.228195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.228306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.228332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.228347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.228362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.228393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.238250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.238377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.238402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.238417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.238431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.238461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.248314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.248469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.248496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.248511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.248535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.248566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.258311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.258465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.258491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.258506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.258520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.258560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.268372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.268495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.268522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.268537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.268551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.268581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.278376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.278482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.278507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.278522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.278535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.278564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.288386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.288516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.288543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.288557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.288571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.288601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.298419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.298550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.298575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.298591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.298605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.298647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.308454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.308565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.308591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.308613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.308628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.308659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.318489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.318596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.318622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.775 [2024-07-22 17:00:21.318637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.775 [2024-07-22 17:00:21.318651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.775 [2024-07-22 17:00:21.318683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.775 qpair failed and we were unable to recover it.
00:47:01.775 [2024-07-22 17:00:21.328603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.775 [2024-07-22 17:00:21.328705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.775 [2024-07-22 17:00:21.328731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.776 [2024-07-22 17:00:21.328746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.776 [2024-07-22 17:00:21.328759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.776 [2024-07-22 17:00:21.328789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.776 qpair failed and we were unable to recover it.
00:47:01.776 [2024-07-22 17:00:21.338523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.776 [2024-07-22 17:00:21.338654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.776 [2024-07-22 17:00:21.338679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.776 [2024-07-22 17:00:21.338693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.776 [2024-07-22 17:00:21.338707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.776 [2024-07-22 17:00:21.338746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.776 qpair failed and we were unable to recover it.
00:47:01.776 [2024-07-22 17:00:21.348589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.776 [2024-07-22 17:00:21.348695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.776 [2024-07-22 17:00:21.348722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.776 [2024-07-22 17:00:21.348737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.776 [2024-07-22 17:00:21.348751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.776 [2024-07-22 17:00:21.348782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.776 qpair failed and we were unable to recover it.
00:47:01.776 [2024-07-22 17:00:21.358688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.776 [2024-07-22 17:00:21.358794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.776 [2024-07-22 17:00:21.358820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.776 [2024-07-22 17:00:21.358837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.776 [2024-07-22 17:00:21.358849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.776 [2024-07-22 17:00:21.358878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.776 qpair failed and we were unable to recover it.
00:47:01.776 [2024-07-22 17:00:21.368653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.776 [2024-07-22 17:00:21.368768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.776 [2024-07-22 17:00:21.368793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.776 [2024-07-22 17:00:21.368809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.776 [2024-07-22 17:00:21.368822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.776 [2024-07-22 17:00:21.368852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.776 qpair failed and we were unable to recover it.
00:47:01.776 [2024-07-22 17:00:21.378656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.776 [2024-07-22 17:00:21.378767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.776 [2024-07-22 17:00:21.378793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.776 [2024-07-22 17:00:21.378807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.776 [2024-07-22 17:00:21.378821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.776 [2024-07-22 17:00:21.378853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.776 qpair failed and we were unable to recover it.
00:47:01.776 [2024-07-22 17:00:21.388699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.776 [2024-07-22 17:00:21.388809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.776 [2024-07-22 17:00:21.388836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.776 [2024-07-22 17:00:21.388851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.776 [2024-07-22 17:00:21.388865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.776 [2024-07-22 17:00:21.388895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.776 qpair failed and we were unable to recover it.
00:47:01.776 [2024-07-22 17:00:21.398734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.776 [2024-07-22 17:00:21.398841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.776 [2024-07-22 17:00:21.398871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.776 [2024-07-22 17:00:21.398887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.776 [2024-07-22 17:00:21.398900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.776 [2024-07-22 17:00:21.398930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.776 qpair failed and we were unable to recover it.
00:47:01.776 [2024-07-22 17:00:21.408756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.776 [2024-07-22 17:00:21.408859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.776 [2024-07-22 17:00:21.408884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.776 [2024-07-22 17:00:21.408899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.776 [2024-07-22 17:00:21.408913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.776 [2024-07-22 17:00:21.408957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.776 qpair failed and we were unable to recover it.
00:47:01.776 [2024-07-22 17:00:21.418828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:01.776 [2024-07-22 17:00:21.418975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:01.776 [2024-07-22 17:00:21.419001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:01.776 [2024-07-22 17:00:21.419017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:01.776 [2024-07-22 17:00:21.419032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:01.776 [2024-07-22 17:00:21.419072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:01.776 qpair failed and we were unable to recover it.
00:47:02.034 [2024-07-22 17:00:21.428783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.034 [2024-07-22 17:00:21.428896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.428922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.428937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.428959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.428997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.438895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.439023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.439050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.439065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.439089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.439126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.448890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.449022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.449049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.449064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.449078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.449109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.458890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.459022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.459049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.459064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.459078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.459110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.468861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.468992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.469020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.469036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.469050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.469081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.478924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.479058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.479084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.479100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.479113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.479155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.488960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.489080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.489111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.489128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.489142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.489177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.499054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.499168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.499202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.499217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.499231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.499274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.509027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.509144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.509171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.509186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.509200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.509231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.519051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.519170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.519196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.519211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.519226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.519257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.529067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.529187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.529213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.529228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.529243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.529295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.539124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.539240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.539279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.539294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.539308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.539349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.549152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.549259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.549286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.549301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.549330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.549368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.559153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.559283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.559309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.559323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.559337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.559367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.569174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.569294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.569320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.569335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.569349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.569379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.579296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:02.035 [2024-07-22 17:00:21.579426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:02.035 [2024-07-22 17:00:21.579457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:02.035 [2024-07-22 17:00:21.579473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:02.035 [2024-07-22 17:00:21.579487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90
00:47:02.035 [2024-07-22 17:00:21.579526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:47:02.035 qpair failed and we were unable to recover it.
00:47:02.035 [2024-07-22 17:00:21.589294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.035 [2024-07-22 17:00:21.589410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.035 [2024-07-22 17:00:21.589435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.035 [2024-07-22 17:00:21.589449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.035 [2024-07-22 17:00:21.589463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.035 [2024-07-22 17:00:21.589493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.035 qpair failed and we were unable to recover it. 00:47:02.035 [2024-07-22 17:00:21.599299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.035 [2024-07-22 17:00:21.599446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.035 [2024-07-22 17:00:21.599471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.035 [2024-07-22 17:00:21.599485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.035 [2024-07-22 17:00:21.599498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.035 [2024-07-22 17:00:21.599531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.035 qpair failed and we were unable to recover it. 00:47:02.035 [2024-07-22 17:00:21.609319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.035 [2024-07-22 17:00:21.609435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.035 [2024-07-22 17:00:21.609460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.035 [2024-07-22 17:00:21.609475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.035 [2024-07-22 17:00:21.609489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.035 [2024-07-22 17:00:21.609519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.035 qpair failed and we were unable to recover it. 
00:47:02.035 [2024-07-22 17:00:21.619331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.035 [2024-07-22 17:00:21.619438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.035 [2024-07-22 17:00:21.619464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.035 [2024-07-22 17:00:21.619479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.035 [2024-07-22 17:00:21.619498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.035 [2024-07-22 17:00:21.619529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.035 qpair failed and we were unable to recover it. 00:47:02.035 [2024-07-22 17:00:21.629413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.035 [2024-07-22 17:00:21.629528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.035 [2024-07-22 17:00:21.629554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.036 [2024-07-22 17:00:21.629569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.036 [2024-07-22 17:00:21.629583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.036 [2024-07-22 17:00:21.629614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.036 qpair failed and we were unable to recover it. 00:47:02.036 [2024-07-22 17:00:21.639417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.036 [2024-07-22 17:00:21.639522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.036 [2024-07-22 17:00:21.639547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.036 [2024-07-22 17:00:21.639562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.036 [2024-07-22 17:00:21.639579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.036 [2024-07-22 17:00:21.639618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.036 qpair failed and we were unable to recover it. 
00:47:02.036 [2024-07-22 17:00:21.649479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.036 [2024-07-22 17:00:21.649601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.036 [2024-07-22 17:00:21.649626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.036 [2024-07-22 17:00:21.649640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.036 [2024-07-22 17:00:21.649654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.036 [2024-07-22 17:00:21.649684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.036 qpair failed and we were unable to recover it. 00:47:02.036 [2024-07-22 17:00:21.659544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.036 [2024-07-22 17:00:21.659674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.036 [2024-07-22 17:00:21.659699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.036 [2024-07-22 17:00:21.659714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.036 [2024-07-22 17:00:21.659728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.036 [2024-07-22 17:00:21.659758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.036 qpair failed and we were unable to recover it. 00:47:02.036 [2024-07-22 17:00:21.669497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.036 [2024-07-22 17:00:21.669607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.036 [2024-07-22 17:00:21.669632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.036 [2024-07-22 17:00:21.669647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.036 [2024-07-22 17:00:21.669659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.036 [2024-07-22 17:00:21.669689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.036 qpair failed and we were unable to recover it. 
00:47:02.036 [2024-07-22 17:00:21.679519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.036 [2024-07-22 17:00:21.679628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.036 [2024-07-22 17:00:21.679654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.036 [2024-07-22 17:00:21.679684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.036 [2024-07-22 17:00:21.679698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.036 [2024-07-22 17:00:21.679730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.036 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.689557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.689664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.689690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.689704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.689719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.689748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.699587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.699710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.699735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.699750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.699764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.699795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 
00:47:02.294 [2024-07-22 17:00:21.709605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.709710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.709735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.709756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.709771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.709801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.719627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.719737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.719761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.719775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.719788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.719817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.729679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.729789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.729815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.729830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.729844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.729874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 
00:47:02.294 [2024-07-22 17:00:21.739716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.739839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.739865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.739880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.739894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.739924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.749719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.749822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.749848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.749863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.749878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.749907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.759727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.759839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.759863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.759878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.759891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.759922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 
00:47:02.294 [2024-07-22 17:00:21.769816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.769923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.769970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.769989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.770013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.770045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.779892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.780066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.780094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.780110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.780124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.780157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.789829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.789940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.789990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.790006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.790020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.790053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 
00:47:02.294 [2024-07-22 17:00:21.799873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.800004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.800029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.800051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.800065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.800097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.809979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.810127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.810154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.810170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.810184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.810216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.819961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.820104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.820130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.820146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.820160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.820191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 
00:47:02.294 [2024-07-22 17:00:21.829959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.830091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.830118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.830133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.830147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.830179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.840010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.840118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.840153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.840169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.840183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.840215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.850004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.850117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.850141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.850157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.850171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.850203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 
00:47:02.294 [2024-07-22 17:00:21.860107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.860294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.860320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.860336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.860349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.860381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.870112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.870270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.870297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.870312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.870325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.870356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 00:47:02.294 [2024-07-22 17:00:21.880117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.880301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.880328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.294 [2024-07-22 17:00:21.880344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.294 [2024-07-22 17:00:21.880357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.294 [2024-07-22 17:00:21.880388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.294 qpair failed and we were unable to recover it. 
00:47:02.294 [2024-07-22 17:00:21.890160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.294 [2024-07-22 17:00:21.890290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.294 [2024-07-22 17:00:21.890321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.295 [2024-07-22 17:00:21.890338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.295 [2024-07-22 17:00:21.890351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.295 [2024-07-22 17:00:21.890382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.295 qpair failed and we were unable to recover it. 00:47:02.295 [2024-07-22 17:00:21.900193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.295 [2024-07-22 17:00:21.900319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.295 [2024-07-22 17:00:21.900346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.295 [2024-07-22 17:00:21.900361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.295 [2024-07-22 17:00:21.900375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.295 [2024-07-22 17:00:21.900405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.295 qpair failed and we were unable to recover it. 00:47:02.295 [2024-07-22 17:00:21.910202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.295 [2024-07-22 17:00:21.910317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.295 [2024-07-22 17:00:21.910343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.295 [2024-07-22 17:00:21.910358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.295 [2024-07-22 17:00:21.910371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.295 [2024-07-22 17:00:21.910403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.295 qpair failed and we were unable to recover it. 
00:47:02.295 [2024-07-22 17:00:21.920423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.295 [2024-07-22 17:00:21.920571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.295 [2024-07-22 17:00:21.920597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.295 [2024-07-22 17:00:21.920613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.295 [2024-07-22 17:00:21.920627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.295 [2024-07-22 17:00:21.920657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.295 qpair failed and we were unable to recover it. 00:47:02.295 [2024-07-22 17:00:21.930326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.295 [2024-07-22 17:00:21.930433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.295 [2024-07-22 17:00:21.930457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.295 [2024-07-22 17:00:21.930472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.295 [2024-07-22 17:00:21.930486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.295 [2024-07-22 17:00:21.930522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.295 qpair failed and we were unable to recover it. 00:47:02.295 [2024-07-22 17:00:21.940382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.295 [2024-07-22 17:00:21.940528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.295 [2024-07-22 17:00:21.940555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.295 [2024-07-22 17:00:21.940571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.295 [2024-07-22 17:00:21.940584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.295 [2024-07-22 17:00:21.940615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.295 qpair failed and we were unable to recover it. 
00:47:02.553 [2024-07-22 17:00:21.950365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.553 [2024-07-22 17:00:21.950477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.553 [2024-07-22 17:00:21.950505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.553 [2024-07-22 17:00:21.950521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.553 [2024-07-22 17:00:21.950534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.553 [2024-07-22 17:00:21.950565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.553 qpair failed and we were unable to recover it. 00:47:02.553 [2024-07-22 17:00:21.960357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.553 [2024-07-22 17:00:21.960468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.553 [2024-07-22 17:00:21.960493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.553 [2024-07-22 17:00:21.960509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.553 [2024-07-22 17:00:21.960522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.553 [2024-07-22 17:00:21.960553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.553 qpair failed and we were unable to recover it. 00:47:02.553 [2024-07-22 17:00:21.970365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.553 [2024-07-22 17:00:21.970471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.553 [2024-07-22 17:00:21.970496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.553 [2024-07-22 17:00:21.970510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.553 [2024-07-22 17:00:21.970523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.553 [2024-07-22 17:00:21.970554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.553 qpair failed and we were unable to recover it. 
00:47:02.553 [2024-07-22 17:00:21.980425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.553 [2024-07-22 17:00:21.980538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.553 [2024-07-22 17:00:21.980573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.553 [2024-07-22 17:00:21.980589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.553 [2024-07-22 17:00:21.980603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.553 [2024-07-22 17:00:21.980634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.553 qpair failed and we were unable to recover it. 00:47:02.553 [2024-07-22 17:00:21.990448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.553 [2024-07-22 17:00:21.990611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.553 [2024-07-22 17:00:21.990637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.553 [2024-07-22 17:00:21.990653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.553 [2024-07-22 17:00:21.990667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.554 [2024-07-22 17:00:21.990697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.554 qpair failed and we were unable to recover it. 00:47:02.554 [2024-07-22 17:00:22.000468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.000623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.000650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.000664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.000678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.554 [2024-07-22 17:00:22.000708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.554 qpair failed and we were unable to recover it. 
00:47:02.554 [2024-07-22 17:00:22.010499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.010607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.010632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.010648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.010662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8780000b90 00:47:02.554 [2024-07-22 17:00:22.010693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:47:02.554 qpair failed and we were unable to recover it. 00:47:02.554 [2024-07-22 17:00:22.020551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.020664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.020695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.020711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.020730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.554 [2024-07-22 17:00:22.020761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.554 qpair failed and we were unable to recover it. 00:47:02.554 [2024-07-22 17:00:22.030556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.030665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.030693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.030708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.030721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.554 [2024-07-22 17:00:22.030749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.554 qpair failed and we were unable to recover it. 
00:47:02.554 [2024-07-22 17:00:22.040635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.040764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.040791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.040806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.040819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.554 [2024-07-22 17:00:22.040847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.554 qpair failed and we were unable to recover it. 00:47:02.554 [2024-07-22 17:00:22.050630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.050771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.050798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.050813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.050826] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.554 [2024-07-22 17:00:22.050853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.554 qpair failed and we were unable to recover it. 00:47:02.554 [2024-07-22 17:00:22.060622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.060733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.060768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.060783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.060797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.554 [2024-07-22 17:00:22.060827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.554 qpair failed and we were unable to recover it. 
00:47:02.554 [2024-07-22 17:00:22.070646] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.070761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.070788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.070804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.070817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.554 [2024-07-22 17:00:22.070845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.554 qpair failed and we were unable to recover it. 00:47:02.554 [2024-07-22 17:00:22.080696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.080830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.080858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.080873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.080887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.554 [2024-07-22 17:00:22.080915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.554 qpair failed and we were unable to recover it. 00:47:02.554 [2024-07-22 17:00:22.090718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.090829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.090853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.090868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.090881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.554 [2024-07-22 17:00:22.090910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.554 qpair failed and we were unable to recover it. 
00:47:02.554 [2024-07-22 17:00:22.100765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.100875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.100899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.100914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.100927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.554 [2024-07-22 17:00:22.100978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.554 qpair failed and we were unable to recover it. 00:47:02.554 [2024-07-22 17:00:22.110771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.110885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.110911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.110925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.110943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.554 [2024-07-22 17:00:22.110998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.554 qpair failed and we were unable to recover it. 00:47:02.554 [2024-07-22 17:00:22.120832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.554 [2024-07-22 17:00:22.121001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.554 [2024-07-22 17:00:22.121030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.554 [2024-07-22 17:00:22.121046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.554 [2024-07-22 17:00:22.121059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.554 [2024-07-22 17:00:22.121089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.554 qpair failed and we were unable to recover it. 
00:47:02.555 [2024-07-22 17:00:22.130812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.555 [2024-07-22 17:00:22.130928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.555 [2024-07-22 17:00:22.130977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.555 [2024-07-22 17:00:22.130994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.555 [2024-07-22 17:00:22.131007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.555 [2024-07-22 17:00:22.131039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.555 qpair failed and we were unable to recover it. 00:47:02.555 [2024-07-22 17:00:22.140924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.555 [2024-07-22 17:00:22.141060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.555 [2024-07-22 17:00:22.141088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.555 [2024-07-22 17:00:22.141104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.555 [2024-07-22 17:00:22.141117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.555 [2024-07-22 17:00:22.141146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.555 qpair failed and we were unable to recover it. 00:47:02.555 [2024-07-22 17:00:22.150876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.555 [2024-07-22 17:00:22.151012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.555 [2024-07-22 17:00:22.151040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.555 [2024-07-22 17:00:22.151056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.555 [2024-07-22 17:00:22.151070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.555 [2024-07-22 17:00:22.151099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.555 qpair failed and we were unable to recover it. 
00:47:02.555 [2024-07-22 17:00:22.160908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.555 [2024-07-22 17:00:22.161042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.555 [2024-07-22 17:00:22.161070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.555 [2024-07-22 17:00:22.161085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.555 [2024-07-22 17:00:22.161099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.555 [2024-07-22 17:00:22.161130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.555 qpair failed and we were unable to recover it. 00:47:02.555 [2024-07-22 17:00:22.171016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.555 [2024-07-22 17:00:22.171138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.555 [2024-07-22 17:00:22.171165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.555 [2024-07-22 17:00:22.171181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.555 [2024-07-22 17:00:22.171195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.555 [2024-07-22 17:00:22.171225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.555 qpair failed and we were unable to recover it. 00:47:02.555 [2024-07-22 17:00:22.181003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.555 [2024-07-22 17:00:22.181124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.555 [2024-07-22 17:00:22.181151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.555 [2024-07-22 17:00:22.181167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.555 [2024-07-22 17:00:22.181181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.555 [2024-07-22 17:00:22.181211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.555 qpair failed and we were unable to recover it. 
00:47:02.555 [2024-07-22 17:00:22.191026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.555 [2024-07-22 17:00:22.191152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.555 [2024-07-22 17:00:22.191179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.555 [2024-07-22 17:00:22.191195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.555 [2024-07-22 17:00:22.191209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.555 [2024-07-22 17:00:22.191238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.555 qpair failed and we were unable to recover it. 00:47:02.555 [2024-07-22 17:00:22.201110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.555 [2024-07-22 17:00:22.201234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.555 [2024-07-22 17:00:22.201279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.555 [2024-07-22 17:00:22.201301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.555 [2024-07-22 17:00:22.201316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.555 [2024-07-22 17:00:22.201346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.555 qpair failed and we were unable to recover it. 00:47:02.814 [2024-07-22 17:00:22.211046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.814 [2024-07-22 17:00:22.211153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.814 [2024-07-22 17:00:22.211181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.814 [2024-07-22 17:00:22.211198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.814 [2024-07-22 17:00:22.211211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.814 [2024-07-22 17:00:22.211242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.814 qpair failed and we were unable to recover it. 
00:47:02.814 [2024-07-22 17:00:22.221185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.814 [2024-07-22 17:00:22.221318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.814 [2024-07-22 17:00:22.221346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.814 [2024-07-22 17:00:22.221361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.814 [2024-07-22 17:00:22.221374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.814 [2024-07-22 17:00:22.221402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.814 qpair failed and we were unable to recover it. 00:47:02.814 [2024-07-22 17:00:22.231128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.814 [2024-07-22 17:00:22.231244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.814 [2024-07-22 17:00:22.231287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.814 [2024-07-22 17:00:22.231303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.814 [2024-07-22 17:00:22.231316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.814 [2024-07-22 17:00:22.231345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.814 qpair failed and we were unable to recover it. 00:47:02.814 [2024-07-22 17:00:22.241166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.814 [2024-07-22 17:00:22.241296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.814 [2024-07-22 17:00:22.241322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.814 [2024-07-22 17:00:22.241337] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.814 [2024-07-22 17:00:22.241350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.814 [2024-07-22 17:00:22.241379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.814 qpair failed and we were unable to recover it. 
00:47:02.814 [2024-07-22 17:00:22.251227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.814 [2024-07-22 17:00:22.251369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.814 [2024-07-22 17:00:22.251396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.814 [2024-07-22 17:00:22.251411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.814 [2024-07-22 17:00:22.251425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.814 [2024-07-22 17:00:22.251453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.814 qpair failed and we were unable to recover it. 00:47:02.814 [2024-07-22 17:00:22.261274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.814 [2024-07-22 17:00:22.261411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.814 [2024-07-22 17:00:22.261437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.814 [2024-07-22 17:00:22.261453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.814 [2024-07-22 17:00:22.261466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.814 [2024-07-22 17:00:22.261495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.814 qpair failed and we were unable to recover it. 00:47:02.814 [2024-07-22 17:00:22.271299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.814 [2024-07-22 17:00:22.271414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.814 [2024-07-22 17:00:22.271441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.814 [2024-07-22 17:00:22.271457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.814 [2024-07-22 17:00:22.271470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.814 [2024-07-22 17:00:22.271498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.814 qpair failed and we were unable to recover it. 
00:47:02.814 [2024-07-22 17:00:22.281265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.814 [2024-07-22 17:00:22.281389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.814 [2024-07-22 17:00:22.281415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.814 [2024-07-22 17:00:22.281430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.281444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.281472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 00:47:02.815 [2024-07-22 17:00:22.291317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.291428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.291455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.291475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.291490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.291519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 00:47:02.815 [2024-07-22 17:00:22.301299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.301413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.301437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.301453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.301466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.301495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 
00:47:02.815 [2024-07-22 17:00:22.311358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.311467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.311495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.311510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.311523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.311551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 00:47:02.815 [2024-07-22 17:00:22.321426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.321547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.321573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.321589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.321603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.321632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 00:47:02.815 [2024-07-22 17:00:22.331397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.331503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.331527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.331542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.331555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.331586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 
00:47:02.815 [2024-07-22 17:00:22.341430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.341541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.341567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.341582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.341594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.341623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 00:47:02.815 [2024-07-22 17:00:22.351483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.351629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.351656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.351671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.351685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.351714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 00:47:02.815 [2024-07-22 17:00:22.361480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.361589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.361614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.361629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.361642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.361671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 
00:47:02.815 [2024-07-22 17:00:22.371480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.371587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.371613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.371628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.371640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.371668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 00:47:02.815 [2024-07-22 17:00:22.381526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.381695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.381724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.381745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.381760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.381791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 00:47:02.815 [2024-07-22 17:00:22.391557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.391664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.391690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.391705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.391718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.391747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 
00:47:02.815 [2024-07-22 17:00:22.401659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.401781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.401806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.401821] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.401834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.401862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.815 qpair failed and we were unable to recover it. 00:47:02.815 [2024-07-22 17:00:22.411570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.815 [2024-07-22 17:00:22.411673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.815 [2024-07-22 17:00:22.411698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.815 [2024-07-22 17:00:22.411712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.815 [2024-07-22 17:00:22.411725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.815 [2024-07-22 17:00:22.411753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.816 qpair failed and we were unable to recover it. 00:47:02.816 [2024-07-22 17:00:22.421638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.816 [2024-07-22 17:00:22.421761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.816 [2024-07-22 17:00:22.421786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.816 [2024-07-22 17:00:22.421801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.816 [2024-07-22 17:00:22.421814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.816 [2024-07-22 17:00:22.421843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.816 qpair failed and we were unable to recover it. 
00:47:02.816 [2024-07-22 17:00:22.431634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.816 [2024-07-22 17:00:22.431743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.816 [2024-07-22 17:00:22.431769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.816 [2024-07-22 17:00:22.431783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.816 [2024-07-22 17:00:22.431796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.816 [2024-07-22 17:00:22.431825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.816 qpair failed and we were unable to recover it. 00:47:02.816 [2024-07-22 17:00:22.441667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.816 [2024-07-22 17:00:22.441799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.816 [2024-07-22 17:00:22.441825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.816 [2024-07-22 17:00:22.441840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.816 [2024-07-22 17:00:22.441853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.816 [2024-07-22 17:00:22.441881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.816 qpair failed and we were unable to recover it. 00:47:02.816 [2024-07-22 17:00:22.451747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.816 [2024-07-22 17:00:22.451858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.816 [2024-07-22 17:00:22.451884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.816 [2024-07-22 17:00:22.451899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.816 [2024-07-22 17:00:22.451913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.816 [2024-07-22 17:00:22.451942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.816 qpair failed and we were unable to recover it. 
00:47:02.816 [2024-07-22 17:00:22.461811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:02.816 [2024-07-22 17:00:22.461949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:02.816 [2024-07-22 17:00:22.461990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:02.816 [2024-07-22 17:00:22.462008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:02.816 [2024-07-22 17:00:22.462023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:02.816 [2024-07-22 17:00:22.462055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:02.816 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.471771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.471880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.471913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.471930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.471944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.471982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.481802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.481928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.481954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.481977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.481992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.482021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 
00:47:03.082 [2024-07-22 17:00:22.491826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.491954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.491987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.492003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.492017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.492047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.501922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.502111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.502137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.502152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.502165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.502194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.511904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.512094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.512121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.512136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.512149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.512178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 
00:47:03.082 [2024-07-22 17:00:22.521970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.522089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.522115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.522130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.522143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.522173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.532013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.532127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.532154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.532168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.532181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.532210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.541994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.542151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.542177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.542192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.542205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.542234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 
00:47:03.082 [2024-07-22 17:00:22.552002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.552114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.552139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.552154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.552167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.552195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.562025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.562150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.562181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.562196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.562209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.562239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.572086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.572239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.572264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.572278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.572292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.572321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 
00:47:03.082 [2024-07-22 17:00:22.582182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.582314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.582340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.582355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.582369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.582397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.592142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.592308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.592334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.592349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.592362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.592390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.602229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.602368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.602394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.602408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.602421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.602456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 
00:47:03.082 [2024-07-22 17:00:22.612297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.612420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.612446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.612461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.612474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.612502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.622222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.622369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.622394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.622409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.622422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.622451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.632209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.632325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.632351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.632364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.632378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.632416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 
00:47:03.082 [2024-07-22 17:00:22.642249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.642384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.642410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.642424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.642437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.642467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.652335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.652460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.652497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.652512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.652525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.652553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.662372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.662500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.662525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.662539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.662552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.662581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 
00:47:03.082 [2024-07-22 17:00:22.672410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.672535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.082 [2024-07-22 17:00:22.672561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.082 [2024-07-22 17:00:22.672575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.082 [2024-07-22 17:00:22.672587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.082 [2024-07-22 17:00:22.672615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.082 qpair failed and we were unable to recover it. 00:47:03.082 [2024-07-22 17:00:22.682538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.082 [2024-07-22 17:00:22.682645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.083 [2024-07-22 17:00:22.682670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.083 [2024-07-22 17:00:22.682684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.083 [2024-07-22 17:00:22.682699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.083 [2024-07-22 17:00:22.682727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.083 qpair failed and we were unable to recover it. 00:47:03.083 [2024-07-22 17:00:22.692418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.083 [2024-07-22 17:00:22.692556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.083 [2024-07-22 17:00:22.692592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.083 [2024-07-22 17:00:22.692607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.083 [2024-07-22 17:00:22.692620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.083 [2024-07-22 17:00:22.692653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.083 qpair failed and we were unable to recover it. 
00:47:03.083 [2024-07-22 17:00:22.702474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.083 [2024-07-22 17:00:22.702601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.083 [2024-07-22 17:00:22.702627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.083 [2024-07-22 17:00:22.702645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.083 [2024-07-22 17:00:22.702658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.083 [2024-07-22 17:00:22.702687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.083 qpair failed and we were unable to recover it. 00:47:03.083 [2024-07-22 17:00:22.712465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.083 [2024-07-22 17:00:22.712571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.083 [2024-07-22 17:00:22.712597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.083 [2024-07-22 17:00:22.712611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.083 [2024-07-22 17:00:22.712624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.083 [2024-07-22 17:00:22.712651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.083 qpair failed and we were unable to recover it. 00:47:03.083 [2024-07-22 17:00:22.722501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.083 [2024-07-22 17:00:22.722623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.083 [2024-07-22 17:00:22.722650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.083 [2024-07-22 17:00:22.722664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.083 [2024-07-22 17:00:22.722676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.083 [2024-07-22 17:00:22.722705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.083 qpair failed and we were unable to recover it. 
00:47:03.341 [2024-07-22 17:00:22.732534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.341 [2024-07-22 17:00:22.732658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.341 [2024-07-22 17:00:22.732687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.341 [2024-07-22 17:00:22.732702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.341 [2024-07-22 17:00:22.732720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.341 [2024-07-22 17:00:22.732748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.341 qpair failed and we were unable to recover it. 00:47:03.341 [2024-07-22 17:00:22.742583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.341 [2024-07-22 17:00:22.742701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.341 [2024-07-22 17:00:22.742733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.341 [2024-07-22 17:00:22.742760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.341 [2024-07-22 17:00:22.742773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.341 [2024-07-22 17:00:22.742802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.341 qpair failed and we were unable to recover it. 00:47:03.341 [2024-07-22 17:00:22.752589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.341 [2024-07-22 17:00:22.752696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.341 [2024-07-22 17:00:22.752722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.341 [2024-07-22 17:00:22.752737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.341 [2024-07-22 17:00:22.752753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.341 [2024-07-22 17:00:22.752781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.341 qpair failed and we were unable to recover it. 
00:47:03.341 [2024-07-22 17:00:22.762585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.341 [2024-07-22 17:00:22.762689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.341 [2024-07-22 17:00:22.762716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.341 [2024-07-22 17:00:22.762731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.341 [2024-07-22 17:00:22.762743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.341 [2024-07-22 17:00:22.762772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.341 qpair failed and we were unable to recover it. 00:47:03.341 [2024-07-22 17:00:22.772606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.341 [2024-07-22 17:00:22.772712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.341 [2024-07-22 17:00:22.772738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.341 [2024-07-22 17:00:22.772753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.341 [2024-07-22 17:00:22.772766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.341 [2024-07-22 17:00:22.772794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.341 qpair failed and we were unable to recover it. 00:47:03.341 [2024-07-22 17:00:22.782663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.341 [2024-07-22 17:00:22.782770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.341 [2024-07-22 17:00:22.782795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.341 [2024-07-22 17:00:22.782809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.341 [2024-07-22 17:00:22.782827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.341 [2024-07-22 17:00:22.782856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.341 qpair failed and we were unable to recover it. 
00:47:03.341 [2024-07-22 17:00:22.792703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.341 [2024-07-22 17:00:22.792831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.341 [2024-07-22 17:00:22.792858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.341 [2024-07-22 17:00:22.792872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.341 [2024-07-22 17:00:22.792885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.341 [2024-07-22 17:00:22.792913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.341 qpair failed and we were unable to recover it. 00:47:03.341 [2024-07-22 17:00:22.802702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.341 [2024-07-22 17:00:22.802805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.341 [2024-07-22 17:00:22.802831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.341 [2024-07-22 17:00:22.802847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.341 [2024-07-22 17:00:22.802859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.341 [2024-07-22 17:00:22.802887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.341 qpair failed and we were unable to recover it. 00:47:03.341 [2024-07-22 17:00:22.812779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.341 [2024-07-22 17:00:22.812928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.341 [2024-07-22 17:00:22.812978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.341 [2024-07-22 17:00:22.812995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.341 [2024-07-22 17:00:22.813008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.341 [2024-07-22 17:00:22.813037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.341 qpair failed and we were unable to recover it. 
00:47:03.341 [2024-07-22 17:00:22.822750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.341 [2024-07-22 17:00:22.822869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.341 [2024-07-22 17:00:22.822895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.341 [2024-07-22 17:00:22.822909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.341 [2024-07-22 17:00:22.822922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.341 [2024-07-22 17:00:22.822974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.341 qpair failed and we were unable to recover it. 00:47:03.342 [2024-07-22 17:00:22.832776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.832936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.832985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.833002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.833015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.833044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 00:47:03.342 [2024-07-22 17:00:22.842818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.842983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.843010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.843025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.843038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.843068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 
00:47:03.342 [2024-07-22 17:00:22.852855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.853004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.853031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.853046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.853060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.853089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 00:47:03.342 [2024-07-22 17:00:22.862885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.863020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.863047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.863062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.863075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.863105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 00:47:03.342 [2024-07-22 17:00:22.872900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.873068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.873094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.873109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.873128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.873158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 
00:47:03.342 [2024-07-22 17:00:22.882916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.883068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.883094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.883109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.883122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.883152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 00:47:03.342 [2024-07-22 17:00:22.892958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.893110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.893136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.893151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.893164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.893193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 00:47:03.342 [2024-07-22 17:00:22.903073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.903192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.903219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.903234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.903246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.903290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 
00:47:03.342 [2024-07-22 17:00:22.913030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.913147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.913172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.913188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.913201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.913230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 00:47:03.342 [2024-07-22 17:00:22.923055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.923171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.923197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.923212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.923225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.923254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 00:47:03.342 [2024-07-22 17:00:22.933179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.933307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.933334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.933349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.933362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.933391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 
00:47:03.342 [2024-07-22 17:00:22.943209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.943342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.943368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.943383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.943396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.943424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 00:47:03.342 [2024-07-22 17:00:22.953146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.953279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.342 [2024-07-22 17:00:22.953305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.342 [2024-07-22 17:00:22.953319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.342 [2024-07-22 17:00:22.953332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.342 [2024-07-22 17:00:22.953360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.342 qpair failed and we were unable to recover it. 00:47:03.342 [2024-07-22 17:00:22.963224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.342 [2024-07-22 17:00:22.963346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.343 [2024-07-22 17:00:22.963372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.343 [2024-07-22 17:00:22.963386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.343 [2024-07-22 17:00:22.963404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.343 [2024-07-22 17:00:22.963434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.343 qpair failed and we were unable to recover it. 
00:47:03.343 [2024-07-22 17:00:22.973311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.343 [2024-07-22 17:00:22.973424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.343 [2024-07-22 17:00:22.973450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.343 [2024-07-22 17:00:22.973464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.343 [2024-07-22 17:00:22.973477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.343 [2024-07-22 17:00:22.973505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.343 qpair failed and we were unable to recover it. 00:47:03.343 [2024-07-22 17:00:22.983322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.343 [2024-07-22 17:00:22.983454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.343 [2024-07-22 17:00:22.983479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.343 [2024-07-22 17:00:22.983494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.343 [2024-07-22 17:00:22.983507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.343 [2024-07-22 17:00:22.983534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.343 qpair failed and we were unable to recover it. 00:47:03.601 [2024-07-22 17:00:22.993285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.601 [2024-07-22 17:00:22.993413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.601 [2024-07-22 17:00:22.993441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.601 [2024-07-22 17:00:22.993472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.601 [2024-07-22 17:00:22.993485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.601 [2024-07-22 17:00:22.993515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.601 qpair failed and we were unable to recover it. 
00:47:03.601 [2024-07-22 17:00:23.003390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.601 [2024-07-22 17:00:23.003496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.601 [2024-07-22 17:00:23.003524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.601 [2024-07-22 17:00:23.003539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.601 [2024-07-22 17:00:23.003552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.601 [2024-07-22 17:00:23.003581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.601 qpair failed and we were unable to recover it. 00:47:03.601 [2024-07-22 17:00:23.013350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.601 [2024-07-22 17:00:23.013465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.601 [2024-07-22 17:00:23.013491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.601 [2024-07-22 17:00:23.013506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.601 [2024-07-22 17:00:23.013520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.601 [2024-07-22 17:00:23.013548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.601 qpair failed and we were unable to recover it. 00:47:03.601 [2024-07-22 17:00:23.023399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.601 [2024-07-22 17:00:23.023507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.601 [2024-07-22 17:00:23.023532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.601 [2024-07-22 17:00:23.023546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.601 [2024-07-22 17:00:23.023559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.601 [2024-07-22 17:00:23.023587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.601 qpair failed and we were unable to recover it. 
00:47:03.601 [2024-07-22 17:00:23.033410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.601 [2024-07-22 17:00:23.033560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.601 [2024-07-22 17:00:23.033585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.601 [2024-07-22 17:00:23.033600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.601 [2024-07-22 17:00:23.033613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.601 [2024-07-22 17:00:23.033641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.601 qpair failed and we were unable to recover it. 00:47:03.601 [2024-07-22 17:00:23.043469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.601 [2024-07-22 17:00:23.043576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.601 [2024-07-22 17:00:23.043602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.601 [2024-07-22 17:00:23.043616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.601 [2024-07-22 17:00:23.043629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.601 [2024-07-22 17:00:23.043658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.601 qpair failed and we were unable to recover it. 00:47:03.601 [2024-07-22 17:00:23.053446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.601 [2024-07-22 17:00:23.053546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.601 [2024-07-22 17:00:23.053572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.601 [2024-07-22 17:00:23.053592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.601 [2024-07-22 17:00:23.053605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.601 [2024-07-22 17:00:23.053634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.601 qpair failed and we were unable to recover it. 
00:47:03.601 [2024-07-22 17:00:23.063488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.601 [2024-07-22 17:00:23.063597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.601 [2024-07-22 17:00:23.063622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.601 [2024-07-22 17:00:23.063637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.601 [2024-07-22 17:00:23.063649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.601 [2024-07-22 17:00:23.063677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.601 qpair failed and we were unable to recover it. 00:47:03.601 [2024-07-22 17:00:23.073500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.601 [2024-07-22 17:00:23.073606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.601 [2024-07-22 17:00:23.073632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.601 [2024-07-22 17:00:23.073647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.601 [2024-07-22 17:00:23.073660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.601 [2024-07-22 17:00:23.073688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.601 qpair failed and we were unable to recover it. 00:47:03.601 [2024-07-22 17:00:23.083623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.601 [2024-07-22 17:00:23.083764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.601 [2024-07-22 17:00:23.083790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.601 [2024-07-22 17:00:23.083804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.601 [2024-07-22 17:00:23.083817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.601 [2024-07-22 17:00:23.083845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.601 qpair failed and we were unable to recover it. 
00:47:03.601 [2024-07-22 17:00:23.093535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.601 [2024-07-22 17:00:23.093635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.601 [2024-07-22 17:00:23.093660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.601 [2024-07-22 17:00:23.093675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.601 [2024-07-22 17:00:23.093688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.601 [2024-07-22 17:00:23.093716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.601 qpair failed and we were unable to recover it. 00:47:03.602 [2024-07-22 17:00:23.103583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.103698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.103724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.103738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.103751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.103779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 00:47:03.602 [2024-07-22 17:00:23.113691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.113794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.113820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.113834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.113846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.113874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 
00:47:03.602 [2024-07-22 17:00:23.123629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.123743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.123768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.123782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.123795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.123823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 00:47:03.602 [2024-07-22 17:00:23.133698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.133851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.133877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.133891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.133904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.133932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 00:47:03.602 [2024-07-22 17:00:23.143688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.143799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.143824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.143843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.143856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.143885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 
00:47:03.602 [2024-07-22 17:00:23.153783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.153900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.153925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.153939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.153975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.154006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 00:47:03.602 [2024-07-22 17:00:23.163856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.164020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.164046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.164061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.164074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.164104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 00:47:03.602 [2024-07-22 17:00:23.173798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.173912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.173937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.173974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.173990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.174019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 
00:47:03.602 [2024-07-22 17:00:23.183807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.183972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.183998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.184014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.184027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.184056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 00:47:03.602 [2024-07-22 17:00:23.193831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.193937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.193986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.194002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.194015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.194045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 00:47:03.602 [2024-07-22 17:00:23.203876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.204024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.204050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.204065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.204078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.204108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 
00:47:03.602 [2024-07-22 17:00:23.213984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.214099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.214126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.214141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.214154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.214184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 00:47:03.602 [2024-07-22 17:00:23.224024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.602 [2024-07-22 17:00:23.224149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.602 [2024-07-22 17:00:23.224176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.602 [2024-07-22 17:00:23.224191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.602 [2024-07-22 17:00:23.224204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.602 [2024-07-22 17:00:23.224233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.602 qpair failed and we were unable to recover it. 00:47:03.603 [2024-07-22 17:00:23.233961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.603 [2024-07-22 17:00:23.234119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.603 [2024-07-22 17:00:23.234151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.603 [2024-07-22 17:00:23.234166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.603 [2024-07-22 17:00:23.234180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.603 [2024-07-22 17:00:23.234209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.603 qpair failed and we were unable to recover it. 
00:47:03.603 [2024-07-22 17:00:23.243996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.603 [2024-07-22 17:00:23.244108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.603 [2024-07-22 17:00:23.244135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.603 [2024-07-22 17:00:23.244150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.603 [2024-07-22 17:00:23.244163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.603 [2024-07-22 17:00:23.244192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.603 qpair failed and we were unable to recover it. 00:47:03.862 [2024-07-22 17:00:23.254030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.862 [2024-07-22 17:00:23.254145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.862 [2024-07-22 17:00:23.254174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.862 [2024-07-22 17:00:23.254192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.862 [2024-07-22 17:00:23.254207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.862 [2024-07-22 17:00:23.254239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.862 qpair failed and we were unable to recover it. 00:47:03.862 [2024-07-22 17:00:23.264123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.862 [2024-07-22 17:00:23.264286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.862 [2024-07-22 17:00:23.264313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.862 [2024-07-22 17:00:23.264329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.862 [2024-07-22 17:00:23.264341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.862 [2024-07-22 17:00:23.264371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.862 qpair failed and we were unable to recover it. 
00:47:03.862 [2024-07-22 17:00:23.274103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.862 [2024-07-22 17:00:23.274227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.862 [2024-07-22 17:00:23.274268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.862 [2024-07-22 17:00:23.274283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.862 [2024-07-22 17:00:23.274295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.862 [2024-07-22 17:00:23.274325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.862 qpair failed and we were unable to recover it. 00:47:03.862 [2024-07-22 17:00:23.284106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.862 [2024-07-22 17:00:23.284279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.862 [2024-07-22 17:00:23.284305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.862 [2024-07-22 17:00:23.284320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.862 [2024-07-22 17:00:23.284332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.862 [2024-07-22 17:00:23.284361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.862 qpair failed and we were unable to recover it. 00:47:03.862 [2024-07-22 17:00:23.294121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.862 [2024-07-22 17:00:23.294232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.862 [2024-07-22 17:00:23.294273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.862 [2024-07-22 17:00:23.294289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.862 [2024-07-22 17:00:23.294301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.862 [2024-07-22 17:00:23.294330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.862 qpair failed and we were unable to recover it. 
00:47:03.862 [2024-07-22 17:00:23.304177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.862 [2024-07-22 17:00:23.304317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.862 [2024-07-22 17:00:23.304343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.862 [2024-07-22 17:00:23.304358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.862 [2024-07-22 17:00:23.304370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.862 [2024-07-22 17:00:23.304398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.862 qpair failed and we were unable to recover it. 00:47:03.862 [2024-07-22 17:00:23.314301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.862 [2024-07-22 17:00:23.314445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.862 [2024-07-22 17:00:23.314470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.862 [2024-07-22 17:00:23.314485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.862 [2024-07-22 17:00:23.314498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.862 [2024-07-22 17:00:23.314526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.862 qpair failed and we were unable to recover it. 00:47:03.862 [2024-07-22 17:00:23.324240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.862 [2024-07-22 17:00:23.324412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.862 [2024-07-22 17:00:23.324442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.862 [2024-07-22 17:00:23.324458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.862 [2024-07-22 17:00:23.324471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.862 [2024-07-22 17:00:23.324499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.862 qpair failed and we were unable to recover it. 
00:47:03.862 [2024-07-22 17:00:23.334234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.862 [2024-07-22 17:00:23.334361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.862 [2024-07-22 17:00:23.334387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.862 [2024-07-22 17:00:23.334402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.862 [2024-07-22 17:00:23.334414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.862 [2024-07-22 17:00:23.334442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 00:47:03.863 [2024-07-22 17:00:23.344314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.344461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.344487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.344502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.344515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.344544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 00:47:03.863 [2024-07-22 17:00:23.354295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.354417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.354442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.354457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.354470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.354498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 
00:47:03.863 [2024-07-22 17:00:23.364314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.364430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.364456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.364471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.364484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.364517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 00:47:03.863 [2024-07-22 17:00:23.374371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.374530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.374555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.374570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.374583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.374611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 00:47:03.863 [2024-07-22 17:00:23.384375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.384486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.384511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.384526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.384539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.384567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 
00:47:03.863 [2024-07-22 17:00:23.394411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.394521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.394545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.394560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.394573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.394600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 00:47:03.863 [2024-07-22 17:00:23.404436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.404538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.404564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.404579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.404592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.404620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 00:47:03.863 [2024-07-22 17:00:23.414466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.414592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.414621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.414637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.414650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.414678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 
00:47:03.863 [2024-07-22 17:00:23.424500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.424649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.424674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.424688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.424701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.424730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 00:47:03.863 [2024-07-22 17:00:23.434540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.434646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.434672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.434687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.434699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.434727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 00:47:03.863 [2024-07-22 17:00:23.444636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.444745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.444770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.444784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.444797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.444825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 
00:47:03.863 [2024-07-22 17:00:23.454579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.454683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.454708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.454723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.454735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.454768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.863 qpair failed and we were unable to recover it. 00:47:03.863 [2024-07-22 17:00:23.464596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.863 [2024-07-22 17:00:23.464722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.863 [2024-07-22 17:00:23.464747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.863 [2024-07-22 17:00:23.464762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.863 [2024-07-22 17:00:23.464774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.863 [2024-07-22 17:00:23.464802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.864 qpair failed and we were unable to recover it. 00:47:03.864 [2024-07-22 17:00:23.474605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.864 [2024-07-22 17:00:23.474712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.864 [2024-07-22 17:00:23.474738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.864 [2024-07-22 17:00:23.474752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.864 [2024-07-22 17:00:23.474764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.864 [2024-07-22 17:00:23.474792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.864 qpair failed and we were unable to recover it. 
00:47:03.864 [2024-07-22 17:00:23.484745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.864 [2024-07-22 17:00:23.484853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.864 [2024-07-22 17:00:23.484878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.864 [2024-07-22 17:00:23.484892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.864 [2024-07-22 17:00:23.484905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.864 [2024-07-22 17:00:23.484933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.864 qpair failed and we were unable to recover it. 00:47:03.864 [2024-07-22 17:00:23.494673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.864 [2024-07-22 17:00:23.494824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.864 [2024-07-22 17:00:23.494849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.864 [2024-07-22 17:00:23.494864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.864 [2024-07-22 17:00:23.494877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.864 [2024-07-22 17:00:23.494906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.864 qpair failed and we were unable to recover it. 00:47:03.864 [2024-07-22 17:00:23.504772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:03.864 [2024-07-22 17:00:23.504909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:03.864 [2024-07-22 17:00:23.504955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:03.864 [2024-07-22 17:00:23.504980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:03.864 [2024-07-22 17:00:23.504994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:03.864 [2024-07-22 17:00:23.505024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:03.864 qpair failed and we were unable to recover it. 
00:47:04.122 [2024-07-22 17:00:23.514751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.122 [2024-07-22 17:00:23.514861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.122 [2024-07-22 17:00:23.514890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.122 [2024-07-22 17:00:23.514907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.122 [2024-07-22 17:00:23.514920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.122 [2024-07-22 17:00:23.514950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.122 qpair failed and we were unable to recover it. 00:47:04.122 [2024-07-22 17:00:23.524792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.122 [2024-07-22 17:00:23.524923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.122 [2024-07-22 17:00:23.524951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.122 [2024-07-22 17:00:23.524973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.122 [2024-07-22 17:00:23.524988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.122 [2024-07-22 17:00:23.525019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.122 qpair failed and we were unable to recover it. 00:47:04.122 [2024-07-22 17:00:23.534779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.122 [2024-07-22 17:00:23.534881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.122 [2024-07-22 17:00:23.534908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.122 [2024-07-22 17:00:23.534923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.122 [2024-07-22 17:00:23.534935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.122 [2024-07-22 17:00:23.534990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.122 qpair failed and we were unable to recover it. 
00:47:04.122 [2024-07-22 17:00:23.544815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.122 [2024-07-22 17:00:23.544930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.122 [2024-07-22 17:00:23.544980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.122 [2024-07-22 17:00:23.544997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.122 [2024-07-22 17:00:23.545015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.122 [2024-07-22 17:00:23.545046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.122 qpair failed and we were unable to recover it. 00:47:04.122 [2024-07-22 17:00:23.554841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.122 [2024-07-22 17:00:23.554972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.122 [2024-07-22 17:00:23.554998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.122 [2024-07-22 17:00:23.555014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.122 [2024-07-22 17:00:23.555027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.122 [2024-07-22 17:00:23.555056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.122 qpair failed and we were unable to recover it. 00:47:04.122 [2024-07-22 17:00:23.564856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.122 [2024-07-22 17:00:23.564995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.122 [2024-07-22 17:00:23.565023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.565038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.565051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.565080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 
00:47:04.123 [2024-07-22 17:00:23.574915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.575048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.575074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.575090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.575103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.575131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 00:47:04.123 [2024-07-22 17:00:23.584982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.585099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.585126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.585140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.585153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.585183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 00:47:04.123 [2024-07-22 17:00:23.595016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.595134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.595161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.595176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.595188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.595218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 
00:47:04.123 [2024-07-22 17:00:23.605054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.605164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.605191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.605206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.605219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.605248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 00:47:04.123 [2024-07-22 17:00:23.615022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.615148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.615175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.615190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.615203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.615232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 00:47:04.123 [2024-07-22 17:00:23.625053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.625174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.625201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.625216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.625229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.625259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 
00:47:04.123 [2024-07-22 17:00:23.635163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.635300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.635325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.635340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.635358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.635387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 00:47:04.123 [2024-07-22 17:00:23.645121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.645236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.645264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.645294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.645307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.645336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 00:47:04.123 [2024-07-22 17:00:23.655227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.655363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.655388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.655403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.655416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.655444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 
00:47:04.123 [2024-07-22 17:00:23.665226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.665373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.665399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.665421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.665433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.665461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 00:47:04.123 [2024-07-22 17:00:23.675184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.675318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.675343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.675358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.675371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.675399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 00:47:04.123 [2024-07-22 17:00:23.685328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.685445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.685480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.123 [2024-07-22 17:00:23.685495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.123 [2024-07-22 17:00:23.685508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.123 [2024-07-22 17:00:23.685537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.123 qpair failed and we were unable to recover it. 
00:47:04.123 [2024-07-22 17:00:23.695286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.123 [2024-07-22 17:00:23.695394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.123 [2024-07-22 17:00:23.695421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.124 [2024-07-22 17:00:23.695436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.124 [2024-07-22 17:00:23.695449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.124 [2024-07-22 17:00:23.695484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.124 qpair failed and we were unable to recover it. 00:47:04.124 [2024-07-22 17:00:23.705393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.124 [2024-07-22 17:00:23.705502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.124 [2024-07-22 17:00:23.705528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.124 [2024-07-22 17:00:23.705543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.124 [2024-07-22 17:00:23.705555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.124 [2024-07-22 17:00:23.705584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.124 qpair failed and we were unable to recover it. 00:47:04.124 [2024-07-22 17:00:23.715415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.124 [2024-07-22 17:00:23.715519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.124 [2024-07-22 17:00:23.715544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.124 [2024-07-22 17:00:23.715559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.124 [2024-07-22 17:00:23.715583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.124 [2024-07-22 17:00:23.715612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.124 qpair failed and we were unable to recover it. 
00:47:04.124 [2024-07-22 17:00:23.725434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.124 [2024-07-22 17:00:23.725575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.124 [2024-07-22 17:00:23.725599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.124 [2024-07-22 17:00:23.725613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.124 [2024-07-22 17:00:23.725630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.124 [2024-07-22 17:00:23.725658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.124 qpair failed and we were unable to recover it. 00:47:04.124 [2024-07-22 17:00:23.735493] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.124 [2024-07-22 17:00:23.735611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.124 [2024-07-22 17:00:23.735637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.124 [2024-07-22 17:00:23.735652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.124 [2024-07-22 17:00:23.735665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.124 [2024-07-22 17:00:23.735692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.124 qpair failed and we were unable to recover it. 00:47:04.124 [2024-07-22 17:00:23.745523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.124 [2024-07-22 17:00:23.745632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.124 [2024-07-22 17:00:23.745657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.124 [2024-07-22 17:00:23.745672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.124 [2024-07-22 17:00:23.745684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.124 [2024-07-22 17:00:23.745723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.124 qpair failed and we were unable to recover it. 
00:47:04.124 [2024-07-22 17:00:23.755535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.124 [2024-07-22 17:00:23.755665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.124 [2024-07-22 17:00:23.755697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.124 [2024-07-22 17:00:23.755712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.124 [2024-07-22 17:00:23.755724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.124 [2024-07-22 17:00:23.755753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.124 qpair failed and we were unable to recover it. 00:47:04.124 [2024-07-22 17:00:23.765496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.124 [2024-07-22 17:00:23.765601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.124 [2024-07-22 17:00:23.765626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.124 [2024-07-22 17:00:23.765640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.124 [2024-07-22 17:00:23.765653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.124 [2024-07-22 17:00:23.765681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.124 qpair failed and we were unable to recover it. 00:47:04.383 [2024-07-22 17:00:23.775501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.775610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.775643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.383 [2024-07-22 17:00:23.775660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.383 [2024-07-22 17:00:23.775673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.383 [2024-07-22 17:00:23.775707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.383 qpair failed and we were unable to recover it. 
00:47:04.383 [2024-07-22 17:00:23.785567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.785679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.785706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.383 [2024-07-22 17:00:23.785721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.383 [2024-07-22 17:00:23.785734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.383 [2024-07-22 17:00:23.785762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.383 qpair failed and we were unable to recover it. 00:47:04.383 [2024-07-22 17:00:23.795571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.795681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.795708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.383 [2024-07-22 17:00:23.795723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.383 [2024-07-22 17:00:23.795735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.383 [2024-07-22 17:00:23.795763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.383 qpair failed and we were unable to recover it. 00:47:04.383 [2024-07-22 17:00:23.805605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.805710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.805736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.383 [2024-07-22 17:00:23.805750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.383 [2024-07-22 17:00:23.805763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.383 [2024-07-22 17:00:23.805791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.383 qpair failed and we were unable to recover it. 
00:47:04.383 [2024-07-22 17:00:23.815631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.815736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.815761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.383 [2024-07-22 17:00:23.815781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.383 [2024-07-22 17:00:23.815795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.383 [2024-07-22 17:00:23.815823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.383 qpair failed and we were unable to recover it. 00:47:04.383 [2024-07-22 17:00:23.825730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.825855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.825880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.383 [2024-07-22 17:00:23.825895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.383 [2024-07-22 17:00:23.825916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.383 [2024-07-22 17:00:23.825959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.383 qpair failed and we were unable to recover it. 00:47:04.383 [2024-07-22 17:00:23.835703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.835811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.835836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.383 [2024-07-22 17:00:23.835851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.383 [2024-07-22 17:00:23.835865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.383 [2024-07-22 17:00:23.835894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.383 qpair failed and we were unable to recover it. 
00:47:04.383 [2024-07-22 17:00:23.845709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.845815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.845840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.383 [2024-07-22 17:00:23.845855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.383 [2024-07-22 17:00:23.845868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.383 [2024-07-22 17:00:23.845896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.383 qpair failed and we were unable to recover it. 00:47:04.383 [2024-07-22 17:00:23.855740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.855867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.855893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.383 [2024-07-22 17:00:23.855907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.383 [2024-07-22 17:00:23.855920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.383 [2024-07-22 17:00:23.855971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.383 qpair failed and we were unable to recover it. 00:47:04.383 [2024-07-22 17:00:23.865796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.865903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.865929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.383 [2024-07-22 17:00:23.865958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.383 [2024-07-22 17:00:23.865979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.383 [2024-07-22 17:00:23.866010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.383 qpair failed and we were unable to recover it. 
00:47:04.383 [2024-07-22 17:00:23.875869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.876000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.876027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.383 [2024-07-22 17:00:23.876042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.383 [2024-07-22 17:00:23.876055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.383 [2024-07-22 17:00:23.876084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.383 qpair failed and we were unable to recover it. 00:47:04.383 [2024-07-22 17:00:23.885850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.383 [2024-07-22 17:00:23.885991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.383 [2024-07-22 17:00:23.886017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.886033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.886046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.886075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 00:47:04.384 [2024-07-22 17:00:23.895882] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:23.896011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:23.896038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.896054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.896067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.896096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 
00:47:04.384 [2024-07-22 17:00:23.905895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:23.906017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:23.906043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.906064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.906078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.906107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 00:47:04.384 [2024-07-22 17:00:23.915918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:23.916044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:23.916070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.916085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.916097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.916127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 00:47:04.384 [2024-07-22 17:00:23.926026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:23.926167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:23.926193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.926208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.926228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.926257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 
00:47:04.384 [2024-07-22 17:00:23.936000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:23.936108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:23.936134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.936149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.936162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.936191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 00:47:04.384 [2024-07-22 17:00:23.946002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:23.946140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:23.946166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.946181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.946194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.946224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 00:47:04.384 [2024-07-22 17:00:23.956038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:23.956155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:23.956181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.956196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.956209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.956238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 
00:47:04.384 [2024-07-22 17:00:23.966054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:23.966171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:23.966197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.966212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.966225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.966269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 00:47:04.384 [2024-07-22 17:00:23.976157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:23.976281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:23.976306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.976321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.976333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.976372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 00:47:04.384 [2024-07-22 17:00:23.986136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:23.986251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:23.986292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.986307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.986319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.986348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 
00:47:04.384 [2024-07-22 17:00:23.996161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:23.996288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:23.996313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:23.996334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:23.996347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:23.996376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 00:47:04.384 [2024-07-22 17:00:24.006193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:24.006315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.384 [2024-07-22 17:00:24.006340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.384 [2024-07-22 17:00:24.006355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.384 [2024-07-22 17:00:24.006367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.384 [2024-07-22 17:00:24.006395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.384 qpair failed and we were unable to recover it. 00:47:04.384 [2024-07-22 17:00:24.016299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.384 [2024-07-22 17:00:24.016415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.385 [2024-07-22 17:00:24.016440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.385 [2024-07-22 17:00:24.016455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.385 [2024-07-22 17:00:24.016468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.385 [2024-07-22 17:00:24.016496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.385 qpair failed and we were unable to recover it. 
00:47:04.385 [2024-07-22 17:00:24.026239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.385 [2024-07-22 17:00:24.026381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.385 [2024-07-22 17:00:24.026406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.385 [2024-07-22 17:00:24.026421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.385 [2024-07-22 17:00:24.026433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.385 [2024-07-22 17:00:24.026462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.385 qpair failed and we were unable to recover it. 00:47:04.643 [2024-07-22 17:00:24.036269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.643 [2024-07-22 17:00:24.036428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.643 [2024-07-22 17:00:24.036461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.643 [2024-07-22 17:00:24.036488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.643 [2024-07-22 17:00:24.036513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.643 [2024-07-22 17:00:24.036560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.643 qpair failed and we were unable to recover it. 00:47:04.643 [2024-07-22 17:00:24.046324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.643 [2024-07-22 17:00:24.046428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.643 [2024-07-22 17:00:24.046455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.643 [2024-07-22 17:00:24.046469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.643 [2024-07-22 17:00:24.046483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.643 [2024-07-22 17:00:24.046513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.643 qpair failed and we were unable to recover it. 
00:47:04.643 [2024-07-22 17:00:24.056338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.643 [2024-07-22 17:00:24.056485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.643 [2024-07-22 17:00:24.056512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.643 [2024-07-22 17:00:24.056528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.643 [2024-07-22 17:00:24.056541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.643 [2024-07-22 17:00:24.056570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.643 qpair failed and we were unable to recover it. 00:47:04.643 [2024-07-22 17:00:24.066404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.643 [2024-07-22 17:00:24.066527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.643 [2024-07-22 17:00:24.066553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.643 [2024-07-22 17:00:24.066568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.643 [2024-07-22 17:00:24.066581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.643 [2024-07-22 17:00:24.066610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.643 qpair failed and we were unable to recover it. 00:47:04.643 [2024-07-22 17:00:24.076433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.643 [2024-07-22 17:00:24.076538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.643 [2024-07-22 17:00:24.076563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.643 [2024-07-22 17:00:24.076578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.643 [2024-07-22 17:00:24.076591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.643 [2024-07-22 17:00:24.076620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.643 qpair failed and we were unable to recover it. 
00:47:04.643 [2024-07-22 17:00:24.086416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.643 [2024-07-22 17:00:24.086519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.643 [2024-07-22 17:00:24.086548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.643 [2024-07-22 17:00:24.086564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.643 [2024-07-22 17:00:24.086578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.643 [2024-07-22 17:00:24.086607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.643 qpair failed and we were unable to recover it. 00:47:04.643 [2024-07-22 17:00:24.096437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.643 [2024-07-22 17:00:24.096538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.643 [2024-07-22 17:00:24.096563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.643 [2024-07-22 17:00:24.096577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.643 [2024-07-22 17:00:24.096590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.643 [2024-07-22 17:00:24.096619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.643 qpair failed and we were unable to recover it. 00:47:04.643 [2024-07-22 17:00:24.106481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.643 [2024-07-22 17:00:24.106594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.643 [2024-07-22 17:00:24.106628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.643 [2024-07-22 17:00:24.106643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.643 [2024-07-22 17:00:24.106657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.643 [2024-07-22 17:00:24.106685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.643 qpair failed and we were unable to recover it. 
00:47:04.643 [2024-07-22 17:00:24.116456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.643 [2024-07-22 17:00:24.116559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.643 [2024-07-22 17:00:24.116584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.643 [2024-07-22 17:00:24.116599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.643 [2024-07-22 17:00:24.116613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.643 [2024-07-22 17:00:24.116642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.643 qpair failed and we were unable to recover it. 00:47:04.643 [2024-07-22 17:00:24.126522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.126630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.126654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.126668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.126681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.126715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 00:47:04.644 [2024-07-22 17:00:24.136626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.136755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.136782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.136797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.136810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.136839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 
00:47:04.644 [2024-07-22 17:00:24.146595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.146706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.146731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.146746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.146759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.146788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 00:47:04.644 [2024-07-22 17:00:24.156609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.156716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.156740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.156755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.156768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.156797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 00:47:04.644 [2024-07-22 17:00:24.166634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.166740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.166765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.166780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.166793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.166823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 
00:47:04.644 [2024-07-22 17:00:24.176659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.176764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.176804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.176820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.176833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.176862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 00:47:04.644 [2024-07-22 17:00:24.186655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.186800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.186826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.186841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.186855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.186883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 00:47:04.644 [2024-07-22 17:00:24.196741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.196868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.196895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.196910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.196922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.196975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 
00:47:04.644 [2024-07-22 17:00:24.206792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.206902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.206927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.206942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.206977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.207009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 00:47:04.644 [2024-07-22 17:00:24.216751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.216858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.216882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.216897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.216911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.216960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 00:47:04.644 [2024-07-22 17:00:24.226794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.226905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.226929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.226943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.226981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.227013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 
00:47:04.644 [2024-07-22 17:00:24.236848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.236957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.237002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.237018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.237032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.237062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 00:47:04.644 [2024-07-22 17:00:24.246934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.644 [2024-07-22 17:00:24.247128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.644 [2024-07-22 17:00:24.247156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.644 [2024-07-22 17:00:24.247171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.644 [2024-07-22 17:00:24.247185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.644 [2024-07-22 17:00:24.247215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.644 qpair failed and we were unable to recover it. 00:47:04.644 [2024-07-22 17:00:24.256874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.645 [2024-07-22 17:00:24.257049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.645 [2024-07-22 17:00:24.257075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.645 [2024-07-22 17:00:24.257091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.645 [2024-07-22 17:00:24.257104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.645 [2024-07-22 17:00:24.257134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.645 qpair failed and we were unable to recover it. 
00:47:04.645 [2024-07-22 17:00:24.266917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.645 [2024-07-22 17:00:24.267096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.645 [2024-07-22 17:00:24.267128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.645 [2024-07-22 17:00:24.267146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.645 [2024-07-22 17:00:24.267159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.645 [2024-07-22 17:00:24.267188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.645 qpair failed and we were unable to recover it. 00:47:04.645 [2024-07-22 17:00:24.276988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.645 [2024-07-22 17:00:24.277092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.645 [2024-07-22 17:00:24.277117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.645 [2024-07-22 17:00:24.277131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.645 [2024-07-22 17:00:24.277145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.645 [2024-07-22 17:00:24.277174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.645 qpair failed and we were unable to recover it. 00:47:04.645 [2024-07-22 17:00:24.286980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.645 [2024-07-22 17:00:24.287092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.645 [2024-07-22 17:00:24.287127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.645 [2024-07-22 17:00:24.287144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.645 [2024-07-22 17:00:24.287157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.645 [2024-07-22 17:00:24.287188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.645 qpair failed and we were unable to recover it. 
00:47:04.903 [2024-07-22 17:00:24.297003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.903 [2024-07-22 17:00:24.297120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.903 [2024-07-22 17:00:24.297150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.903 [2024-07-22 17:00:24.297167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.903 [2024-07-22 17:00:24.297181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.903 [2024-07-22 17:00:24.297213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.903 qpair failed and we were unable to recover it. 00:47:04.903 [2024-07-22 17:00:24.307023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.903 [2024-07-22 17:00:24.307138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.903 [2024-07-22 17:00:24.307162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.903 [2024-07-22 17:00:24.307178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.903 [2024-07-22 17:00:24.307191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.903 [2024-07-22 17:00:24.307227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.903 qpair failed and we were unable to recover it. 00:47:04.903 [2024-07-22 17:00:24.317090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.903 [2024-07-22 17:00:24.317218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.903 [2024-07-22 17:00:24.317246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.903 [2024-07-22 17:00:24.317262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.903 [2024-07-22 17:00:24.317275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.903 [2024-07-22 17:00:24.317321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.903 qpair failed and we were unable to recover it. 
00:47:04.903 [2024-07-22 17:00:24.327070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.903 [2024-07-22 17:00:24.327186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.903 [2024-07-22 17:00:24.327214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.903 [2024-07-22 17:00:24.327230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.903 [2024-07-22 17:00:24.327244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.903 [2024-07-22 17:00:24.327290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.903 qpair failed and we were unable to recover it. 00:47:04.903 [2024-07-22 17:00:24.337123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.903 [2024-07-22 17:00:24.337285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.903 [2024-07-22 17:00:24.337311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.903 [2024-07-22 17:00:24.337327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.903 [2024-07-22 17:00:24.337340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.903 [2024-07-22 17:00:24.337368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.903 qpair failed and we were unable to recover it. 00:47:04.903 [2024-07-22 17:00:24.347168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.903 [2024-07-22 17:00:24.347374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.903 [2024-07-22 17:00:24.347400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.903 [2024-07-22 17:00:24.347415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.903 [2024-07-22 17:00:24.347428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.903 [2024-07-22 17:00:24.347457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.903 qpair failed and we were unable to recover it. 
00:47:04.903 [2024-07-22 17:00:24.357223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.903 [2024-07-22 17:00:24.357355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.903 [2024-07-22 17:00:24.357385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.903 [2024-07-22 17:00:24.357401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.903 [2024-07-22 17:00:24.357415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.903 [2024-07-22 17:00:24.357443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.903 qpair failed and we were unable to recover it. 00:47:04.903 [2024-07-22 17:00:24.367230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.903 [2024-07-22 17:00:24.367370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.903 [2024-07-22 17:00:24.367396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.903 [2024-07-22 17:00:24.367411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.903 [2024-07-22 17:00:24.367424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.903 [2024-07-22 17:00:24.367453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.903 qpair failed and we were unable to recover it. 00:47:04.903 [2024-07-22 17:00:24.377232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.903 [2024-07-22 17:00:24.377351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.903 [2024-07-22 17:00:24.377377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.903 [2024-07-22 17:00:24.377391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.903 [2024-07-22 17:00:24.377404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.903 [2024-07-22 17:00:24.377433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.904 qpair failed and we were unable to recover it. 
00:47:04.904 [2024-07-22 17:00:24.387302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.904 [2024-07-22 17:00:24.387424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.904 [2024-07-22 17:00:24.387448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.904 [2024-07-22 17:00:24.387463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.904 [2024-07-22 17:00:24.387476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.904 [2024-07-22 17:00:24.387505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.904 qpair failed and we were unable to recover it. 00:47:04.904 [2024-07-22 17:00:24.397303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.904 [2024-07-22 17:00:24.397415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.904 [2024-07-22 17:00:24.397439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.904 [2024-07-22 17:00:24.397454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.904 [2024-07-22 17:00:24.397472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.904 [2024-07-22 17:00:24.397502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.904 qpair failed and we were unable to recover it. 00:47:04.904 [2024-07-22 17:00:24.407358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.904 [2024-07-22 17:00:24.407493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.904 [2024-07-22 17:00:24.407519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.904 [2024-07-22 17:00:24.407534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.904 [2024-07-22 17:00:24.407546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.904 [2024-07-22 17:00:24.407575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.904 qpair failed and we were unable to recover it. 
00:47:04.904 [2024-07-22 17:00:24.417378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.904 [2024-07-22 17:00:24.417494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.904 [2024-07-22 17:00:24.417519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.904 [2024-07-22 17:00:24.417535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.904 [2024-07-22 17:00:24.417548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.904 [2024-07-22 17:00:24.417577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.904 qpair failed and we were unable to recover it. 00:47:04.904 [2024-07-22 17:00:24.427463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.904 [2024-07-22 17:00:24.427605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.904 [2024-07-22 17:00:24.427631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.904 [2024-07-22 17:00:24.427647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.904 [2024-07-22 17:00:24.427660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.904 [2024-07-22 17:00:24.427690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.904 qpair failed and we were unable to recover it. 00:47:04.904 [2024-07-22 17:00:24.437407] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:04.904 [2024-07-22 17:00:24.437518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:04.904 [2024-07-22 17:00:24.437543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:04.904 [2024-07-22 17:00:24.437557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:04.904 [2024-07-22 17:00:24.437571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:04.904 [2024-07-22 17:00:24.437599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:04.904 qpair failed and we were unable to recover it. 
00:47:04.904 [2024-07-22 17:00:24.447466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:04.904 [2024-07-22 17:00:24.447605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:04.904 [2024-07-22 17:00:24.447632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:04.904 [2024-07-22 17:00:24.447647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:04.904 [2024-07-22 17:00:24.447661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:04.904 [2024-07-22 17:00:24.447690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:04.904 qpair failed and we were unable to recover it.
00:47:04.904 [2024-07-22 17:00:24.457477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:04.904 [2024-07-22 17:00:24.457585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:04.904 [2024-07-22 17:00:24.457609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:04.904 [2024-07-22 17:00:24.457624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:04.904 [2024-07-22 17:00:24.457637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:04.904 [2024-07-22 17:00:24.457667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:04.904 qpair failed and we were unable to recover it.
00:47:04.904 [2024-07-22 17:00:24.467568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:04.904 [2024-07-22 17:00:24.467681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:04.904 [2024-07-22 17:00:24.467707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:04.904 [2024-07-22 17:00:24.467721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:04.904 [2024-07-22 17:00:24.467734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:04.904 [2024-07-22 17:00:24.467763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:04.904 qpair failed and we were unable to recover it.
00:47:04.904 [2024-07-22 17:00:24.477564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:04.904 [2024-07-22 17:00:24.477701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:04.904 [2024-07-22 17:00:24.477727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:04.904 [2024-07-22 17:00:24.477743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:04.904 [2024-07-22 17:00:24.477757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:04.904 [2024-07-22 17:00:24.477786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:04.904 qpair failed and we were unable to recover it.
00:47:04.904 [2024-07-22 17:00:24.487550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:04.904 [2024-07-22 17:00:24.487661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:04.904 [2024-07-22 17:00:24.487685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:04.904 [2024-07-22 17:00:24.487700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:04.904 [2024-07-22 17:00:24.487718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:04.904 [2024-07-22 17:00:24.487748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:04.904 qpair failed and we were unable to recover it.
00:47:04.904 [2024-07-22 17:00:24.497588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:04.904 [2024-07-22 17:00:24.497696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:04.904 [2024-07-22 17:00:24.497720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:04.904 [2024-07-22 17:00:24.497735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:04.904 [2024-07-22 17:00:24.497748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:04.904 [2024-07-22 17:00:24.497777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:04.904 qpair failed and we were unable to recover it.
00:47:04.904 [2024-07-22 17:00:24.507625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:04.904 [2024-07-22 17:00:24.507736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:04.904 [2024-07-22 17:00:24.507763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:04.904 [2024-07-22 17:00:24.507779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:04.904 [2024-07-22 17:00:24.507792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:04.904 [2024-07-22 17:00:24.507821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:04.904 qpair failed and we were unable to recover it.
00:47:04.905 [2024-07-22 17:00:24.517623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:04.905 [2024-07-22 17:00:24.517732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:04.905 [2024-07-22 17:00:24.517756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:04.905 [2024-07-22 17:00:24.517770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:04.905 [2024-07-22 17:00:24.517783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:04.905 [2024-07-22 17:00:24.517812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:04.905 qpair failed and we were unable to recover it.
00:47:04.905 [2024-07-22 17:00:24.527650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:04.905 [2024-07-22 17:00:24.527845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:04.905 [2024-07-22 17:00:24.527871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:04.905 [2024-07-22 17:00:24.527887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:04.905 [2024-07-22 17:00:24.527900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:04.905 [2024-07-22 17:00:24.527928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:04.905 qpair failed and we were unable to recover it.
00:47:04.905 [2024-07-22 17:00:24.537786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:04.905 [2024-07-22 17:00:24.537907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:04.905 [2024-07-22 17:00:24.537931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:04.905 [2024-07-22 17:00:24.537961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:04.905 [2024-07-22 17:00:24.537983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:04.905 [2024-07-22 17:00:24.538014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:04.905 qpair failed and we were unable to recover it.
00:47:04.905 [2024-07-22 17:00:24.547758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:04.905 [2024-07-22 17:00:24.547869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:04.905 [2024-07-22 17:00:24.547893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:04.905 [2024-07-22 17:00:24.547908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:04.905 [2024-07-22 17:00:24.547921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:04.905 [2024-07-22 17:00:24.547974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:04.905 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.557778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.557892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.557921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.557937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.557973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.558005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.567845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.568011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.568039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.568055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.568070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.568100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.577819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.577925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.577949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.577991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.578007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.578040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.587799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.587913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.587939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.587978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.587992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.588022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.597856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.597987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.598015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.598030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.598043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.598072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.607890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.608026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.608052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.608067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.608081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.608111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.617907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.618042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.618069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.618084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.618098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.618128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.628052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.628172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.628198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.628213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.628227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.628256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.637974] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.638120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.638146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.638161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.638175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.638204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.648005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.648111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.648138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.648153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.648168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.648198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.658032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.658195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.658222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.658238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.658251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.658295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.668060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.164 [2024-07-22 17:00:24.668171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.164 [2024-07-22 17:00:24.668198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.164 [2024-07-22 17:00:24.668220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.164 [2024-07-22 17:00:24.668234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.164 [2024-07-22 17:00:24.668279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.164 qpair failed and we were unable to recover it.
00:47:05.164 [2024-07-22 17:00:24.678134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.678257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.678298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.678313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.678333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.678361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.688188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.688305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.688331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.688346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.688359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.688389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.698132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.698269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.698296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.698327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.698341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.698370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.708185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.708312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.708337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.708352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.708365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.708394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.718177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.718303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.718329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.718344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.718356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.718384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.728348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.728464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.728488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.728502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.728514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.728542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.738315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.738443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.738469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.738484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.738497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.738529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.748334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.748507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.748532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.748547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.748560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.748598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.758323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.758439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.758465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.758485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.758500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.758528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.768322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.768424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.768449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.768464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.768479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.768508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.778459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.778602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.778627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.778642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.778657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.778685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.788390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.788518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.788544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.788559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.788573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.788602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.798442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.165 [2024-07-22 17:00:24.798582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.165 [2024-07-22 17:00:24.798608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.165 [2024-07-22 17:00:24.798622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.165 [2024-07-22 17:00:24.798647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.165 [2024-07-22 17:00:24.798675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.165 qpair failed and we were unable to recover it.
00:47:05.165 [2024-07-22 17:00:24.808446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.166 [2024-07-22 17:00:24.808551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.166 [2024-07-22 17:00:24.808587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.166 [2024-07-22 17:00:24.808627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.166 [2024-07-22 17:00:24.808654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.166 [2024-07-22 17:00:24.808703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.166 qpair failed and we were unable to recover it.
00:47:05.426 [2024-07-22 17:00:24.818596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.426 [2024-07-22 17:00:24.818724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.426 [2024-07-22 17:00:24.818753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.426 [2024-07-22 17:00:24.818769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.426 [2024-07-22 17:00:24.818782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.426 [2024-07-22 17:00:24.818813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.426 qpair failed and we were unable to recover it.
00:47:05.426 [2024-07-22 17:00:24.828491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.426 [2024-07-22 17:00:24.828604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.426 [2024-07-22 17:00:24.828632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.426 [2024-07-22 17:00:24.828648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.426 [2024-07-22 17:00:24.828662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.426 [2024-07-22 17:00:24.828692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.426 qpair failed and we were unable to recover it.
00:47:05.426 [2024-07-22 17:00:24.838599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.426 [2024-07-22 17:00:24.838705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.426 [2024-07-22 17:00:24.838731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.426 [2024-07-22 17:00:24.838747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.426 [2024-07-22 17:00:24.838760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.426 [2024-07-22 17:00:24.838788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.426 qpair failed and we were unable to recover it.
00:47:05.426 [2024-07-22 17:00:24.848633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.426 [2024-07-22 17:00:24.848745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.426 [2024-07-22 17:00:24.848776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.426 [2024-07-22 17:00:24.848792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.426 [2024-07-22 17:00:24.848806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.426 [2024-07-22 17:00:24.848835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.426 qpair failed and we were unable to recover it.
00:47:05.426 [2024-07-22 17:00:24.858572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.426 [2024-07-22 17:00:24.858678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.426 [2024-07-22 17:00:24.858704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.426 [2024-07-22 17:00:24.858719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.426 [2024-07-22 17:00:24.858732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.426 [2024-07-22 17:00:24.858761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.426 qpair failed and we were unable to recover it.
00:47:05.426 [2024-07-22 17:00:24.868625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.426 [2024-07-22 17:00:24.868738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.426 [2024-07-22 17:00:24.868763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.426 [2024-07-22 17:00:24.868778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.426 [2024-07-22 17:00:24.868791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.426 [2024-07-22 17:00:24.868819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.426 qpair failed and we were unable to recover it.
00:47:05.426 [2024-07-22 17:00:24.878609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.426 [2024-07-22 17:00:24.878719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.426 [2024-07-22 17:00:24.878745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.426 [2024-07-22 17:00:24.878759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.426 [2024-07-22 17:00:24.878772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.426 [2024-07-22 17:00:24.878801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.426 qpair failed and we were unable to recover it.
00:47:05.426 [2024-07-22 17:00:24.888737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.426 [2024-07-22 17:00:24.888849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.426 [2024-07-22 17:00:24.888876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.426 [2024-07-22 17:00:24.888891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.426 [2024-07-22 17:00:24.888904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.426 [2024-07-22 17:00:24.888934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.426 qpair failed and we were unable to recover it.
00:47:05.426 [2024-07-22 17:00:24.898734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.426 [2024-07-22 17:00:24.898844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.426 [2024-07-22 17:00:24.898870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.426 [2024-07-22 17:00:24.898884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.426 [2024-07-22 17:00:24.898897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.426 [2024-07-22 17:00:24.898926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.426 qpair failed and we were unable to recover it.
00:47:05.426 [2024-07-22 17:00:24.908840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.426 [2024-07-22 17:00:24.908979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.426 [2024-07-22 17:00:24.909015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.426 [2024-07-22 17:00:24.909029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.426 [2024-07-22 17:00:24.909042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.426 [2024-07-22 17:00:24.909071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.426 qpair failed and we were unable to recover it.
00:47:05.426 [2024-07-22 17:00:24.918746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.427 [2024-07-22 17:00:24.918887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.427 [2024-07-22 17:00:24.918912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.427 [2024-07-22 17:00:24.918927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.427 [2024-07-22 17:00:24.918939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.427 [2024-07-22 17:00:24.918991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.427 qpair failed and we were unable to recover it.
00:47:05.427 [2024-07-22 17:00:24.928849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.427 [2024-07-22 17:00:24.928993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.427 [2024-07-22 17:00:24.929021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.427 [2024-07-22 17:00:24.929036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.427 [2024-07-22 17:00:24.929049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.427 [2024-07-22 17:00:24.929078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.427 qpair failed and we were unable to recover it.
00:47:05.427 [2024-07-22 17:00:24.938826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.427 [2024-07-22 17:00:24.938996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.427 [2024-07-22 17:00:24.939028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.427 [2024-07-22 17:00:24.939045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.427 [2024-07-22 17:00:24.939059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.427 [2024-07-22 17:00:24.939089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.427 qpair failed and we were unable to recover it.
00:47:05.427 [2024-07-22 17:00:24.948921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.427 [2024-07-22 17:00:24.949076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.427 [2024-07-22 17:00:24.949103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.427 [2024-07-22 17:00:24.949118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.427 [2024-07-22 17:00:24.949132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.427 [2024-07-22 17:00:24.949161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.427 qpair failed and we were unable to recover it.
00:47:05.427 [2024-07-22 17:00:24.958894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.427 [2024-07-22 17:00:24.959088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.427 [2024-07-22 17:00:24.959115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.427 [2024-07-22 17:00:24.959130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.427 [2024-07-22 17:00:24.959144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.427 [2024-07-22 17:00:24.959174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.427 qpair failed and we were unable to recover it.
00:47:05.427 [2024-07-22 17:00:24.968986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.427 [2024-07-22 17:00:24.969104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.427 [2024-07-22 17:00:24.969131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.427 [2024-07-22 17:00:24.969158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.427 [2024-07-22 17:00:24.969171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.427 [2024-07-22 17:00:24.969201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.427 qpair failed and we were unable to recover it.
00:47:05.427 [2024-07-22 17:00:24.978979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.427 [2024-07-22 17:00:24.979140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.427 [2024-07-22 17:00:24.979166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.427 [2024-07-22 17:00:24.979182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.427 [2024-07-22 17:00:24.979196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.427 [2024-07-22 17:00:24.979231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.427 qpair failed and we were unable to recover it.
00:47:05.427 [2024-07-22 17:00:24.989014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.427 [2024-07-22 17:00:24.989205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.427 [2024-07-22 17:00:24.989232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.427 [2024-07-22 17:00:24.989263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.427 [2024-07-22 17:00:24.989277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.427 [2024-07-22 17:00:24.989306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.427 qpair failed and we were unable to recover it.
00:47:05.427 [2024-07-22 17:00:24.999016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.427 [2024-07-22 17:00:24.999129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.427 [2024-07-22 17:00:24.999156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.427 [2024-07-22 17:00:24.999172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.427 [2024-07-22 17:00:24.999185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.427 [2024-07-22 17:00:24.999215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.427 qpair failed and we were unable to recover it.
00:47:05.427 [2024-07-22 17:00:25.009079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:47:05.427 [2024-07-22 17:00:25.009196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:47:05.427 [2024-07-22 17:00:25.009222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:47:05.427 [2024-07-22 17:00:25.009237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:47:05.427 [2024-07-22 17:00:25.009251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570
00:47:05.427 [2024-07-22 17:00:25.009300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:47:05.427 qpair failed and we were unable to recover it.
00:47:05.427 [2024-07-22 17:00:25.019118] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.427 [2024-07-22 17:00:25.019237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.427 [2024-07-22 17:00:25.019263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.427 [2024-07-22 17:00:25.019278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.427 [2024-07-22 17:00:25.019291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.427 [2024-07-22 17:00:25.019336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.427 qpair failed and we were unable to recover it. 00:47:05.427 [2024-07-22 17:00:25.029157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.427 [2024-07-22 17:00:25.029297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.427 [2024-07-22 17:00:25.029327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.427 [2024-07-22 17:00:25.029343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.427 [2024-07-22 17:00:25.029355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.427 [2024-07-22 17:00:25.029384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.427 qpair failed and we were unable to recover it. 00:47:05.427 [2024-07-22 17:00:25.039093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.427 [2024-07-22 17:00:25.039222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.427 [2024-07-22 17:00:25.039262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.427 [2024-07-22 17:00:25.039277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.427 [2024-07-22 17:00:25.039290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.427 [2024-07-22 17:00:25.039318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.427 qpair failed and we were unable to recover it. 
00:47:05.427 [2024-07-22 17:00:25.049138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.427 [2024-07-22 17:00:25.049282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.427 [2024-07-22 17:00:25.049307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.427 [2024-07-22 17:00:25.049321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.427 [2024-07-22 17:00:25.049334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.427 [2024-07-22 17:00:25.049362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.427 qpair failed and we were unable to recover it. 00:47:05.427 [2024-07-22 17:00:25.059268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.427 [2024-07-22 17:00:25.059407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.427 [2024-07-22 17:00:25.059432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.427 [2024-07-22 17:00:25.059446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.427 [2024-07-22 17:00:25.059459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.428 [2024-07-22 17:00:25.059486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.428 qpair failed and we were unable to recover it. 00:47:05.428 [2024-07-22 17:00:25.069202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.428 [2024-07-22 17:00:25.069311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.428 [2024-07-22 17:00:25.069337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.428 [2024-07-22 17:00:25.069351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.428 [2024-07-22 17:00:25.069364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.428 [2024-07-22 17:00:25.069401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.428 qpair failed and we were unable to recover it. 
00:47:05.687 [2024-07-22 17:00:25.079287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.687 [2024-07-22 17:00:25.079424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.687 [2024-07-22 17:00:25.079454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.687 [2024-07-22 17:00:25.079470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.687 [2024-07-22 17:00:25.079482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.079511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 00:47:05.688 [2024-07-22 17:00:25.089347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.089485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.089513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.089528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.089541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.089569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 00:47:05.688 [2024-07-22 17:00:25.099255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.099372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.099398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.099413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.099426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.099454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 
00:47:05.688 [2024-07-22 17:00:25.109314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.109432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.109458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.109473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.109486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.109514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 00:47:05.688 [2024-07-22 17:00:25.119395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.119498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.119529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.119544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.119557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.119585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 00:47:05.688 [2024-07-22 17:00:25.129395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.129503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.129528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.129543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.129556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.129584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 
00:47:05.688 [2024-07-22 17:00:25.139390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.139509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.139535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.139550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.139563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.139591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 00:47:05.688 [2024-07-22 17:00:25.149417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.149528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.149553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.149568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.149580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.149609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 00:47:05.688 [2024-07-22 17:00:25.159452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.159575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.159601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.159615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.159632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.159661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 
00:47:05.688 [2024-07-22 17:00:25.169513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.169649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.169675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.169690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.169703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.169732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 00:47:05.688 [2024-07-22 17:00:25.179488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.179604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.179630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.179645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.179658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.179686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 00:47:05.688 [2024-07-22 17:00:25.189606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.189724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.189749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.189763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.189775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.189804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 
00:47:05.688 [2024-07-22 17:00:25.199642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.199748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.199773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.199788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.199801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.199828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 00:47:05.688 [2024-07-22 17:00:25.209689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.688 [2024-07-22 17:00:25.209803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.688 [2024-07-22 17:00:25.209828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.688 [2024-07-22 17:00:25.209843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.688 [2024-07-22 17:00:25.209855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.688 [2024-07-22 17:00:25.209883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.688 qpair failed and we were unable to recover it. 00:47:05.688 [2024-07-22 17:00:25.219592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.219700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.219725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.219740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.219752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.219781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 
00:47:05.689 [2024-07-22 17:00:25.229725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.229829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.229853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.229868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.229880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.229908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 00:47:05.689 [2024-07-22 17:00:25.239684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.239794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.239820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.239835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.239848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.239875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 00:47:05.689 [2024-07-22 17:00:25.249685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.249790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.249817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.249832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.249849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.249878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 
00:47:05.689 [2024-07-22 17:00:25.259749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.259886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.259913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.259927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.259940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.259992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 00:47:05.689 [2024-07-22 17:00:25.269831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.269938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.269986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.270002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.270015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.270044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 00:47:05.689 [2024-07-22 17:00:25.279775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.279888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.279914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.279928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.279956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.279994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 
00:47:05.689 [2024-07-22 17:00:25.289888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.290013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.290040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.290055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.290068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.290098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 00:47:05.689 [2024-07-22 17:00:25.299898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.300070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.300096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.300111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.300124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.300154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 00:47:05.689 [2024-07-22 17:00:25.310008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.310132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.310159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.310174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.310186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.310216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 
00:47:05.689 [2024-07-22 17:00:25.319990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.320101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.320126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.320142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.320155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.320184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 00:47:05.689 [2024-07-22 17:00:25.329924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.689 [2024-07-22 17:00:25.330099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.689 [2024-07-22 17:00:25.330126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.689 [2024-07-22 17:00:25.330141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.689 [2024-07-22 17:00:25.330154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.689 [2024-07-22 17:00:25.330183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.689 qpair failed and we were unable to recover it. 00:47:05.948 [2024-07-22 17:00:25.340051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.948 [2024-07-22 17:00:25.340170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.948 [2024-07-22 17:00:25.340199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.948 [2024-07-22 17:00:25.340215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.948 [2024-07-22 17:00:25.340235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.948 [2024-07-22 17:00:25.340270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.948 qpair failed and we were unable to recover it. 
00:47:05.948 [2024-07-22 17:00:25.349995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.948 [2024-07-22 17:00:25.350113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.948 [2024-07-22 17:00:25.350141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.948 [2024-07-22 17:00:25.350157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.948 [2024-07-22 17:00:25.350170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.948 [2024-07-22 17:00:25.350201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.948 qpair failed and we were unable to recover it. 00:47:05.948 [2024-07-22 17:00:25.360065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.948 [2024-07-22 17:00:25.360182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.948 [2024-07-22 17:00:25.360210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.948 [2024-07-22 17:00:25.360225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.948 [2024-07-22 17:00:25.360238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.948 [2024-07-22 17:00:25.360268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.948 qpair failed and we were unable to recover it. 00:47:05.948 [2024-07-22 17:00:25.370107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.948 [2024-07-22 17:00:25.370228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.948 [2024-07-22 17:00:25.370254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.948 [2024-07-22 17:00:25.370284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.948 [2024-07-22 17:00:25.370297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.948 [2024-07-22 17:00:25.370327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.948 qpair failed and we were unable to recover it. 
00:47:05.948 [2024-07-22 17:00:25.380081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.948 [2024-07-22 17:00:25.380194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.948 [2024-07-22 17:00:25.380221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.948 [2024-07-22 17:00:25.380236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.948 [2024-07-22 17:00:25.380249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.948 [2024-07-22 17:00:25.380292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.948 qpair failed and we were unable to recover it. 00:47:05.948 [2024-07-22 17:00:25.390129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.948 [2024-07-22 17:00:25.390241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.948 [2024-07-22 17:00:25.390281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.948 [2024-07-22 17:00:25.390296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.948 [2024-07-22 17:00:25.390309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.948 [2024-07-22 17:00:25.390337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.948 qpair failed and we were unable to recover it. 00:47:05.948 [2024-07-22 17:00:25.400244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.948 [2024-07-22 17:00:25.400419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.948 [2024-07-22 17:00:25.400444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.948 [2024-07-22 17:00:25.400459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.948 [2024-07-22 17:00:25.400472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.948 [2024-07-22 17:00:25.400500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.948 qpair failed and we were unable to recover it. 
00:47:05.948 [2024-07-22 17:00:25.410181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.948 [2024-07-22 17:00:25.410314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.948 [2024-07-22 17:00:25.410339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.948 [2024-07-22 17:00:25.410353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.948 [2024-07-22 17:00:25.410366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.948 [2024-07-22 17:00:25.410395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.948 qpair failed and we were unable to recover it. 00:47:05.948 [2024-07-22 17:00:25.420220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.948 [2024-07-22 17:00:25.420345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.948 [2024-07-22 17:00:25.420371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.948 [2024-07-22 17:00:25.420385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.948 [2024-07-22 17:00:25.420398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.948 [2024-07-22 17:00:25.420426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.948 qpair failed and we were unable to recover it. 00:47:05.948 [2024-07-22 17:00:25.430232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.948 [2024-07-22 17:00:25.430357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.948 [2024-07-22 17:00:25.430383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.948 [2024-07-22 17:00:25.430403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.948 [2024-07-22 17:00:25.430416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.948 [2024-07-22 17:00:25.430444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.948 qpair failed and we were unable to recover it. 
00:47:05.949 [2024-07-22 17:00:25.440342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.440451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.440477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.440491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.440504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.440532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 00:47:05.949 [2024-07-22 17:00:25.450273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.450375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.450401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.450416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.450428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.450456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 00:47:05.949 [2024-07-22 17:00:25.460355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.460467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.460492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.460507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.460520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.460548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 
00:47:05.949 [2024-07-22 17:00:25.470356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.470468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.470494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.470508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.470520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.470549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 00:47:05.949 [2024-07-22 17:00:25.480344] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.480461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.480487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.480502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.480514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.480542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 00:47:05.949 [2024-07-22 17:00:25.490457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.490571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.490596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.490610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.490623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.490652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 
00:47:05.949 [2024-07-22 17:00:25.500466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.500596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.500623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.500637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.500650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.500679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 00:47:05.949 [2024-07-22 17:00:25.510462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.510586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.510611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.510626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.510638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.510666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 00:47:05.949 [2024-07-22 17:00:25.520494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.520618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.520645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.520664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.520677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.520705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 
00:47:05.949 [2024-07-22 17:00:25.530486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.530591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.530616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.530631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.530644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.530671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 00:47:05.949 [2024-07-22 17:00:25.540589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.540707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.540732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.540746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.540759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.540787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 00:47:05.949 [2024-07-22 17:00:25.550547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:05.949 [2024-07-22 17:00:25.550652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:05.949 [2024-07-22 17:00:25.550678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:05.949 [2024-07-22 17:00:25.550692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:05.949 [2024-07-22 17:00:25.550705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:05.949 [2024-07-22 17:00:25.550732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:05.949 qpair failed and we were unable to recover it. 
[... the same CONNECT failure sequence repeats 66 more times with only the timestamps advancing (wall clock 2024-07-22 17:00:25.560 through 17:00:26.212, elapsed 00:47:05.949 through 00:47:06.731); every retry against tqpair=0x140c570 fails identically with "Unknown controller ID 0x1", sct 1, sc 130, and CQ transport error -6 on qpair id 3, each block ending "qpair failed and we were unable to recover it." ...]
00:47:06.731 [2024-07-22 17:00:26.222456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.731 [2024-07-22 17:00:26.222567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.731 [2024-07-22 17:00:26.222593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.222607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.222620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.222649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 00:47:06.732 [2024-07-22 17:00:26.232503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.232610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.232636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.232650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.232663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.232690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 00:47:06.732 [2024-07-22 17:00:26.242595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.242760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.242785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.242800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.242812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.242844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 
00:47:06.732 [2024-07-22 17:00:26.252579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.252682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.252707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.252721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.252733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.252761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 00:47:06.732 [2024-07-22 17:00:26.262575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.262679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.262704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.262718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.262732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.262760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 00:47:06.732 [2024-07-22 17:00:26.272611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.272722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.272747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.272761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.272773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.272801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 
00:47:06.732 [2024-07-22 17:00:26.282657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.282762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.282787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.282806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.282819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.282848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 00:47:06.732 [2024-07-22 17:00:26.292646] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.292754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.292779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.292794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.292807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.292835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 00:47:06.732 [2024-07-22 17:00:26.302711] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.302857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.302882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.302897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.302910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.302954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 
00:47:06.732 [2024-07-22 17:00:26.312689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.312801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.312826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.312841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.312854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.312882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 00:47:06.732 [2024-07-22 17:00:26.322835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.322996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.323023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.323039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.323053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.323082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 00:47:06.732 [2024-07-22 17:00:26.332765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.332870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.332896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.332911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.332924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.332975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 
00:47:06.732 [2024-07-22 17:00:26.342832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.342976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.343003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.343019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.732 [2024-07-22 17:00:26.343033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.732 [2024-07-22 17:00:26.343062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.732 qpair failed and we were unable to recover it. 00:47:06.732 [2024-07-22 17:00:26.352848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.732 [2024-07-22 17:00:26.352984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.732 [2024-07-22 17:00:26.353010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.732 [2024-07-22 17:00:26.353026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.733 [2024-07-22 17:00:26.353039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.733 [2024-07-22 17:00:26.353069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.733 qpair failed and we were unable to recover it. 00:47:06.733 [2024-07-22 17:00:26.362874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.733 [2024-07-22 17:00:26.363006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.733 [2024-07-22 17:00:26.363031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.733 [2024-07-22 17:00:26.363047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.733 [2024-07-22 17:00:26.363061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.733 [2024-07-22 17:00:26.363091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.733 qpair failed and we were unable to recover it. 
00:47:06.733 [2024-07-22 17:00:26.372897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.733 [2024-07-22 17:00:26.373028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.733 [2024-07-22 17:00:26.373053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.733 [2024-07-22 17:00:26.373074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.733 [2024-07-22 17:00:26.373088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.733 [2024-07-22 17:00:26.373118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.733 qpair failed and we were unable to recover it. 00:47:06.990 [2024-07-22 17:00:26.382916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.990 [2024-07-22 17:00:26.383054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.990 [2024-07-22 17:00:26.383083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.990 [2024-07-22 17:00:26.383099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.990 [2024-07-22 17:00:26.383124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.990 [2024-07-22 17:00:26.383155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.990 qpair failed and we were unable to recover it. 00:47:06.990 [2024-07-22 17:00:26.393064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.990 [2024-07-22 17:00:26.393187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.990 [2024-07-22 17:00:26.393216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.990 [2024-07-22 17:00:26.393233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.990 [2024-07-22 17:00:26.393246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.990 [2024-07-22 17:00:26.393291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.990 qpair failed and we were unable to recover it. 
00:47:06.990 [2024-07-22 17:00:26.403050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.990 [2024-07-22 17:00:26.403164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.990 [2024-07-22 17:00:26.403190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.990 [2024-07-22 17:00:26.403206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.990 [2024-07-22 17:00:26.403219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.990 [2024-07-22 17:00:26.403249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.990 qpair failed and we were unable to recover it. 00:47:06.990 [2024-07-22 17:00:26.413024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.990 [2024-07-22 17:00:26.413140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.990 [2024-07-22 17:00:26.413168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.990 [2024-07-22 17:00:26.413183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.990 [2024-07-22 17:00:26.413197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.990 [2024-07-22 17:00:26.413226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.990 qpair failed and we were unable to recover it. 00:47:06.990 [2024-07-22 17:00:26.423037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.990 [2024-07-22 17:00:26.423202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.990 [2024-07-22 17:00:26.423229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.990 [2024-07-22 17:00:26.423245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.990 [2024-07-22 17:00:26.423273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.990 [2024-07-22 17:00:26.423304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.990 qpair failed and we were unable to recover it. 
00:47:06.990 [2024-07-22 17:00:26.433066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.990 [2024-07-22 17:00:26.433184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.990 [2024-07-22 17:00:26.433210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.990 [2024-07-22 17:00:26.433226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.990 [2024-07-22 17:00:26.433255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.990 [2024-07-22 17:00:26.433285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.990 qpair failed and we were unable to recover it. 00:47:06.990 [2024-07-22 17:00:26.443077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.990 [2024-07-22 17:00:26.443191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.990 [2024-07-22 17:00:26.443219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.990 [2024-07-22 17:00:26.443235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.990 [2024-07-22 17:00:26.443249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.990 [2024-07-22 17:00:26.443293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.990 qpair failed and we were unable to recover it. 00:47:06.990 [2024-07-22 17:00:26.453131] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.990 [2024-07-22 17:00:26.453241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.990 [2024-07-22 17:00:26.453283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.990 [2024-07-22 17:00:26.453299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.990 [2024-07-22 17:00:26.453312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.990 [2024-07-22 17:00:26.453341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.990 qpair failed and we were unable to recover it. 
00:47:06.990 [2024-07-22 17:00:26.463152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.990 [2024-07-22 17:00:26.463328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.990 [2024-07-22 17:00:26.463359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.990 [2024-07-22 17:00:26.463376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.990 [2024-07-22 17:00:26.463389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.990 [2024-07-22 17:00:26.463418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.990 qpair failed and we were unable to recover it. 00:47:06.990 [2024-07-22 17:00:26.473241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.990 [2024-07-22 17:00:26.473369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.990 [2024-07-22 17:00:26.473394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.990 [2024-07-22 17:00:26.473410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.990 [2024-07-22 17:00:26.473423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.473452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 00:47:06.991 [2024-07-22 17:00:26.483217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.483339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.483366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.483381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.483394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.483423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 
00:47:06.991 [2024-07-22 17:00:26.493241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.493359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.493386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.493401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.493414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.493443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 00:47:06.991 [2024-07-22 17:00:26.503275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.503405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.503431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.503447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.503460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.503493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 00:47:06.991 [2024-07-22 17:00:26.513334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.513448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.513474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.513489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.513502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.513531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 
00:47:06.991 [2024-07-22 17:00:26.523388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.523496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.523523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.523538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.523551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.523579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 00:47:06.991 [2024-07-22 17:00:26.533353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.533463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.533488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.533504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.533517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.533546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 00:47:06.991 [2024-07-22 17:00:26.543427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.543557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.543584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.543599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.543613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.543642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 
00:47:06.991 [2024-07-22 17:00:26.553397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.553508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.553539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.553555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.553569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.553597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 00:47:06.991 [2024-07-22 17:00:26.563412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.563519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.563543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.563557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.563571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.563599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 00:47:06.991 [2024-07-22 17:00:26.573495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.573635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.573661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.573676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.573689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.573718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 
00:47:06.991 [2024-07-22 17:00:26.583552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.583678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.583704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.583719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.583732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.583760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 00:47:06.991 [2024-07-22 17:00:26.593542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.593662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.593689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.593704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.593717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.593751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 00:47:06.991 [2024-07-22 17:00:26.603525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.603631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.603655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.603669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.603682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.603711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 
00:47:06.991 [2024-07-22 17:00:26.613585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.613695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.613720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.613734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.613747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.613777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 00:47:06.991 [2024-07-22 17:00:26.623611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.991 [2024-07-22 17:00:26.623717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.991 [2024-07-22 17:00:26.623742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.991 [2024-07-22 17:00:26.623758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.991 [2024-07-22 17:00:26.623771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.991 [2024-07-22 17:00:26.623799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.991 qpair failed and we were unable to recover it. 00:47:06.991 [2024-07-22 17:00:26.633648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:06.992 [2024-07-22 17:00:26.633757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:06.992 [2024-07-22 17:00:26.633780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:06.992 [2024-07-22 17:00:26.633794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:06.992 [2024-07-22 17:00:26.633808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:06.992 [2024-07-22 17:00:26.633837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:06.992 qpair failed and we were unable to recover it. 
00:47:07.250 [2024-07-22 17:00:26.643677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.250 [2024-07-22 17:00:26.643798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.250 [2024-07-22 17:00:26.643834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.250 [2024-07-22 17:00:26.643851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.250 [2024-07-22 17:00:26.643865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.250 [2024-07-22 17:00:26.643896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.250 qpair failed and we were unable to recover it. 00:47:07.250 [2024-07-22 17:00:26.653711] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.250 [2024-07-22 17:00:26.653858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.250 [2024-07-22 17:00:26.653885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.250 [2024-07-22 17:00:26.653900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.250 [2024-07-22 17:00:26.653913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.250 [2024-07-22 17:00:26.653942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.250 qpair failed and we were unable to recover it. 00:47:07.250 [2024-07-22 17:00:26.663723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.250 [2024-07-22 17:00:26.663832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.250 [2024-07-22 17:00:26.663856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.250 [2024-07-22 17:00:26.663871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.250 [2024-07-22 17:00:26.663884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.250 [2024-07-22 17:00:26.663913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.250 qpair failed and we were unable to recover it. 
00:47:07.250 [2024-07-22 17:00:26.673763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.250 [2024-07-22 17:00:26.673871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.250 [2024-07-22 17:00:26.673897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.250 [2024-07-22 17:00:26.673912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.250 [2024-07-22 17:00:26.673925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.250 [2024-07-22 17:00:26.673979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.250 qpair failed and we were unable to recover it. 00:47:07.250 [2024-07-22 17:00:26.683767] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.250 [2024-07-22 17:00:26.683893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.250 [2024-07-22 17:00:26.683920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.250 [2024-07-22 17:00:26.683935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.250 [2024-07-22 17:00:26.683973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.250 [2024-07-22 17:00:26.684023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.250 qpair failed and we were unable to recover it. 00:47:07.250 [2024-07-22 17:00:26.693831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.250 [2024-07-22 17:00:26.693932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.250 [2024-07-22 17:00:26.693958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.250 [2024-07-22 17:00:26.693997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.250 [2024-07-22 17:00:26.694012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.250 [2024-07-22 17:00:26.694042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.250 qpair failed and we were unable to recover it. 
00:47:07.250 [2024-07-22 17:00:26.703839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.250 [2024-07-22 17:00:26.703977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.250 [2024-07-22 17:00:26.704004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.250 [2024-07-22 17:00:26.704019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.250 [2024-07-22 17:00:26.704045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.250 [2024-07-22 17:00:26.704075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.250 qpair failed and we were unable to recover it. 00:47:07.250 [2024-07-22 17:00:26.713884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.250 [2024-07-22 17:00:26.714028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.250 [2024-07-22 17:00:26.714054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.250 [2024-07-22 17:00:26.714069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.250 [2024-07-22 17:00:26.714082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.250 [2024-07-22 17:00:26.714111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.250 qpair failed and we were unable to recover it. 00:47:07.250 [2024-07-22 17:00:26.724021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.250 [2024-07-22 17:00:26.724138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.250 [2024-07-22 17:00:26.724165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.250 [2024-07-22 17:00:26.724180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.250 [2024-07-22 17:00:26.724194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.724223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 
00:47:07.251 [2024-07-22 17:00:26.733935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.734070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.734100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.734116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.734129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.734158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 00:47:07.251 [2024-07-22 17:00:26.743992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.744119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.744147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.744163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.744177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.744207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 00:47:07.251 [2024-07-22 17:00:26.754187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.754317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.754343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.754358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.754370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.754398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 
00:47:07.251 [2024-07-22 17:00:26.764078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.764274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.764300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.764315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.764329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.764358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 00:47:07.251 [2024-07-22 17:00:26.774071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.774217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.774244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.774258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.774279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.774325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 00:47:07.251 [2024-07-22 17:00:26.784117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.784233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.784275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.784291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.784305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.784333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 
00:47:07.251 [2024-07-22 17:00:26.794192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.794379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.794405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.794419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.794432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.794460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 00:47:07.251 [2024-07-22 17:00:26.804156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.804320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.804346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.804361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.804374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.804403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 00:47:07.251 [2024-07-22 17:00:26.814190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.814315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.814340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.814355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.814367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.814396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 
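Each retry dies in _nvmf_ctrlr_add_io_qpair on the target: the CONNECT data carries CNTLID 0x1, but that controller went away with the admin queue, so every I/O-queue CONNECT that follows is refused until the host resets. When chasing this signature outside the harness, the live target can be asked what it still knows over its RPC socket; a sketch, assuming an SPDK checkout and that these RPC methods are present in the build under test:

    # list configured subsystems, then any controllers cnode1 still tracks
    ./scripts/rpc.py nvmf_get_subsystems
    ./scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1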
00:47:07.251 [2024-07-22 17:00:26.824329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.824439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.824465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.824479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.824492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.824521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 00:47:07.251 [2024-07-22 17:00:26.834338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.834449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.834473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.834488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.834500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.834530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 00:47:07.251 [2024-07-22 17:00:26.844327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.844478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.844504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.844518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.844531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.844560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 
00:47:07.251 [2024-07-22 17:00:26.854323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.251 [2024-07-22 17:00:26.854443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.251 [2024-07-22 17:00:26.854468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.251 [2024-07-22 17:00:26.854483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.251 [2024-07-22 17:00:26.854496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.251 [2024-07-22 17:00:26.854525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.251 qpair failed and we were unable to recover it. 00:47:07.251 [2024-07-22 17:00:26.864343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.252 [2024-07-22 17:00:26.864449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.252 [2024-07-22 17:00:26.864474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.252 [2024-07-22 17:00:26.864489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.252 [2024-07-22 17:00:26.864508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.252 [2024-07-22 17:00:26.864537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.252 qpair failed and we were unable to recover it. 00:47:07.252 [2024-07-22 17:00:26.874427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.252 [2024-07-22 17:00:26.874585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.252 [2024-07-22 17:00:26.874616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.252 [2024-07-22 17:00:26.874631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.252 [2024-07-22 17:00:26.874646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.252 [2024-07-22 17:00:26.874674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.252 qpair failed and we were unable to recover it. 
00:47:07.252 [2024-07-22 17:00:26.884374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.252 [2024-07-22 17:00:26.884501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.252 [2024-07-22 17:00:26.884526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.252 [2024-07-22 17:00:26.884541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.252 [2024-07-22 17:00:26.884554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.252 [2024-07-22 17:00:26.884583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.252 qpair failed and we were unable to recover it. 00:47:07.252 [2024-07-22 17:00:26.894449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.252 [2024-07-22 17:00:26.894618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.252 [2024-07-22 17:00:26.894644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.252 [2024-07-22 17:00:26.894664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.252 [2024-07-22 17:00:26.894680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.252 [2024-07-22 17:00:26.894710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.252 qpair failed and we were unable to recover it. 00:47:07.510 [2024-07-22 17:00:26.904432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.510 [2024-07-22 17:00:26.904572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.510 [2024-07-22 17:00:26.904601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.510 [2024-07-22 17:00:26.904617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.510 [2024-07-22 17:00:26.904630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.510 [2024-07-22 17:00:26.904659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.510 qpair failed and we were unable to recover it. 
00:47:07.510 [2024-07-22 17:00:26.914491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.510 [2024-07-22 17:00:26.914613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.510 [2024-07-22 17:00:26.914640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.510 [2024-07-22 17:00:26.914655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.510 [2024-07-22 17:00:26.914669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.510 [2024-07-22 17:00:26.914697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.510 qpair failed and we were unable to recover it. 00:47:07.510 [2024-07-22 17:00:26.924542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.511 [2024-07-22 17:00:26.924704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.511 [2024-07-22 17:00:26.924730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.511 [2024-07-22 17:00:26.924745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.511 [2024-07-22 17:00:26.924758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.511 [2024-07-22 17:00:26.924787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.511 qpair failed and we were unable to recover it. 00:47:07.511 [2024-07-22 17:00:26.934540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.511 [2024-07-22 17:00:26.934647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.511 [2024-07-22 17:00:26.934673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.511 [2024-07-22 17:00:26.934688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.511 [2024-07-22 17:00:26.934701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.511 [2024-07-22 17:00:26.934729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.511 qpair failed and we were unable to recover it. 
00:47:07.511 [2024-07-22 17:00:26.944584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:47:07.511 [2024-07-22 17:00:26.944688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:47:07.511 [2024-07-22 17:00:26.944714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:47:07.511 [2024-07-22 17:00:26.944729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:47:07.511 [2024-07-22 17:00:26.944741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x140c570 00:47:07.511 [2024-07-22 17:00:26.944770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:47:07.511 qpair failed and we were unable to recover it. 00:47:07.511 [2024-07-22 17:00:26.944916] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:47:07.511 A controller has encountered a failure and is being reset. 00:47:07.511 qpair failed and we were unable to recover it. 00:47:07.511 qpair failed and we were unable to recover it. 00:47:07.511 qpair failed and we were unable to recover it. 00:47:07.511 qpair failed and we were unable to recover it. 00:47:07.511 Controller properly reset. 00:47:07.511 Initializing NVMe Controllers 00:47:07.511 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:47:07.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:47:07.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:47:07.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:47:07.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:47:07.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:47:07.511 Initialization complete. Launching workers. 
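This is the recovery path doing its job: the keep-alive submission fails, the host flags the controller ("A controller has encountered a failure and is being reset"), the four outstanding qpairs are flushed (the bare "qpair failed" lines), and after "Controller properly reset" the host re-attaches and re-associates its I/O queues with lcores 0 through 3. Those attach/associate banners come from SPDK's example I/O apps; a comparable stand-alone probe of the same listener could look like the following (SPDK build tree assumed; queue depth and runtime are illustrative, not the test's actual parameters):

    # push a short random-read load through the freshly reset controller
    ./build/examples/perf -q 32 -o 4096 -w randread -t 5 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'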
00:47:07.511 Starting thread on core 1 00:47:07.511 Starting thread on core 2 00:47:07.511 Starting thread on core 3 00:47:07.511 Starting thread on core 0 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:47:07.511 00:47:07.511 real 0m11.505s 00:47:07.511 user 0m21.218s 00:47:07.511 sys 0m5.541s 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:47:07.511 ************************************ 00:47:07.511 END TEST nvmf_target_disconnect_tc2 00:47:07.511 ************************************ 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:47:07.511 rmmod nvme_tcp 00:47:07.511 rmmod nvme_fabrics 00:47:07.511 rmmod nvme_keyring 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2968339 ']' 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2968339 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 2968339 ']' 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 2968339 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:47:07.511 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2968339 00:47:07.769 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:47:07.770 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:47:07.770 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2968339' 00:47:07.770 killing process with pid 2968339 00:47:07.770 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 2968339 00:47:07.770 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 2968339 00:47:08.027 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:47:08.027 
17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:47:08.027 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:47:08.027 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:08.027 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:47:08.027 17:00:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:08.027 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:08.027 17:00:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:09.930 17:00:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:47:09.930 00:47:09.930 real 0m16.833s 00:47:09.930 user 0m47.888s 00:47:09.930 sys 0m7.817s 00:47:09.930 17:00:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:47:09.930 17:00:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:47:09.930 ************************************ 00:47:09.930 END TEST nvmf_target_disconnect 00:47:09.930 ************************************ 00:47:09.931 17:00:29 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:47:09.931 17:00:29 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:09.931 17:00:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:09.931 17:00:29 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:47:09.931 00:47:09.931 real 27m53.918s 00:47:09.931 user 76m4.037s 00:47:09.931 sys 6m48.389s 00:47:09.931 17:00:29 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:47:09.931 17:00:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:09.931 ************************************ 00:47:09.931 END TEST nvmf_tcp 00:47:09.931 ************************************ 00:47:09.931 17:00:29 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:47:09.931 17:00:29 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:47:09.931 17:00:29 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:47:09.931 17:00:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:47:09.931 17:00:29 -- common/autotest_common.sh@10 -- # set +x 00:47:09.931 ************************************ 00:47:09.931 START TEST spdkcli_nvmf_tcp 00:47:09.931 ************************************ 00:47:09.931 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:47:10.191 * Looking for test storage... 
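Between the two suites above, nvmftestfini unloads the kernel initiator stack and flushes the test interface before spdkcli_nvmf_tcp begins. As a stand-alone script the teardown amounts to the following (module and interface names are taken from the log; the iso-image branch the harness skips is omitted):

    #!/usr/bin/env bash
    sync
    # removing nvme-tcp also drags out its now-unused dependents, which is
    # why the log shows rmmod nvme_tcp, nvme_fabrics and nvme_keyring here
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics   # usually a no-op by this point, kept for symmetry
    # the harness then removes its network namespace and clears the test IP
    ip -4 addr flush cvl_0_1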
00:47:10.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:10.191 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2969527 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2969527 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 2969527 ']' 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:10.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:47:10.192 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:10.192 [2024-07-22 17:00:29.686588] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:47:10.192 [2024-07-22 17:00:29.686663] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2969527 ] 00:47:10.192 EAL: No free 2048 kB hugepages reported on node 1 00:47:10.192 [2024-07-22 17:00:29.759336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:10.450 [2024-07-22 17:00:29.850835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:10.450 [2024-07-22 17:00:29.850841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:10.450 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:47:10.450 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:47:10.450 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:47:10.450 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:10.450 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:10.450 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:47:10.450 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:47:10.450 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:47:10.450 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:10.450 17:00:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:10.450 17:00:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:47:10.450 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:47:10.450 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:47:10.450 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:47:10.450 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:47:10.450 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:47:10.450 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:47:10.450 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:47:10.450 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:47:10.450 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:47:10.450 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:47:10.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:47:10.450 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:47:10.450 ' 00:47:13.018 [2024-07-22 17:00:32.549059] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:14.395 [2024-07-22 17:00:33.789541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:47:16.921 [2024-07-22 17:00:36.076723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:47:18.816 [2024-07-22 17:00:38.050868] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:47:20.188 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:47:20.188 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:47:20.188 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:47:20.188 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:47:20.188 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:47:20.188 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:47:20.188 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:47:20.188 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:47:20.188 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:47:20.188 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:47:20.188 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:47:20.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:47:20.188 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:47:20.188 17:00:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:47:20.188 17:00:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:20.188 17:00:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:20.188 17:00:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:47:20.188 17:00:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:20.188 17:00:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:20.188 17:00:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:47:20.188 17:00:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:47:20.753 17:00:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:47:20.753 17:00:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:47:20.753 17:00:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:47:20.753 17:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:20.753 17:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:20.753 17:00:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:47:20.753 17:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:20.753 17:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:20.753 17:00:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:47:20.753 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:47:20.753 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:47:20.753 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:47:20.753 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:47:20.753 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:47:20.753 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:47:20.753 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:47:20.753 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:47:20.753 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:47:20.753 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:47:20.753 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:47:20.753 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:47:20.753 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:47:20.753 ' 00:47:26.010 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:47:26.010 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:47:26.010 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:47:26.010 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:47:26.010 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:47:26.010 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:47:26.010 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:47:26.010 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:47:26.010 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:47:26.010 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:47:26.010 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:47:26.010 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:47:26.010 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:47:26.010 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2969527 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 2969527 ']' 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 2969527 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2969527 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:47:26.010 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2969527' 00:47:26.010 killing process with pid 2969527 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 2969527 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 2969527 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2969527 ']' 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2969527 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 2969527 ']' 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 2969527 00:47:26.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2969527) - No such process 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 2969527 is not found' 00:47:26.011 Process with pid 2969527 is not found 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:47:26.011 00:47:26.011 real 0m16.036s 00:47:26.011 user 0m33.896s 00:47:26.011 sys 0m0.838s 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:47:26.011 17:00:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:26.011 ************************************ 00:47:26.011 END TEST spdkcli_nvmf_tcp 00:47:26.011 ************************************ 00:47:26.011 17:00:45 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:47:26.011 17:00:45 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:47:26.011 17:00:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:47:26.011 17:00:45 -- common/autotest_common.sh@10 -- # set +x 00:47:26.011 ************************************ 00:47:26.011 START TEST nvmf_identify_passthru 00:47:26.011 ************************************ 00:47:26.011 17:00:45 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:47:26.269 * Looking for test storage... 00:47:26.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:26.269 17:00:45 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:26.269 17:00:45 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:26.269 17:00:45 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:26.269 17:00:45 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:26.269 17:00:45 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:26.269 17:00:45 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:26.269 17:00:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:26.269 17:00:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:47:26.269 17:00:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:26.269 17:00:45 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:26.269 17:00:45 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:26.269 17:00:45 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:26.269 17:00:45 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:26.269 17:00:45 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:26.269 17:00:45 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:26.269 17:00:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:26.269 17:00:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:47:26.269 17:00:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:26.269 17:00:45 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:26.269 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:47:26.270 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:47:26.270 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:47:26.270 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:26.270 17:00:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:26.270 17:00:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:26.270 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:47:26.270 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:47:26.270 17:00:45 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:47:26.270 17:00:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
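Note on the device scan traced here: gather_supported_nvmf_pci_devs buckets NICs by PCI vendor/device ID before picking test interfaces, using the ID tables visible above (Intel 0x1592/0x159b for E810, 0x37d2 for X722, plus the Mellanox list). A minimal sketch of that classification, assuming the standard sysfs layout rather than quoting nvmf/common.sh verbatim:
    intel=0x8086
    e810=() ; e810_ids=(0x1592 0x159b)
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        # Bucket Intel E810 functions by device ID; the x722/mlx tables work the same way.
        if [[ $vendor == "$intel" ]]; then
            for id in "${e810_ids[@]}"; do
                [[ $device == "$id" ]] && e810+=("${dev##*/}")
            done
        fi
    done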
00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:47:28.800 17:00:47 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:47:28.800 Found 0000:82:00.0 (0x8086 - 0x159b) 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:47:28.800 Found 0000:82:00.1 (0x8086 - 0x159b) 00:47:28.800 17:00:48 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:47:28.800 Found net devices under 0000:82:00.0: cvl_0_0 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:28.800 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:47:28.801 Found net devices under 0000:82:00.1: cvl_0_1 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:47:28.801 17:00:48 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:47:28.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:28.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:47:28.801 00:47:28.801 --- 10.0.0.2 ping statistics --- 00:47:28.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:28.801 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:28.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:28.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:47:28.801 00:47:28.801 --- 10.0.0.1 ping statistics --- 00:47:28.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:28.801 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:47:28.801 17:00:48 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:47:28.801 17:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:28.801 17:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:81:00.0 00:47:28.801 17:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:81:00.0 00:47:28.801 17:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:81:00.0 00:47:28.801 17:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:81:00.0 ']' 00:47:28.801 17:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' -i 0 00:47:28.801 17:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:47:28.801 17:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:47:28.801 EAL: No free 2048 kB hugepages reported on node 1 00:47:34.062 
17:00:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ951302VM2P0BGN 00:47:34.062 17:00:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' -i 0 00:47:34.062 17:00:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:47:34.062 17:00:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:47:34.062 EAL: No free 2048 kB hugepages reported on node 1 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2974590 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2974590 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 2974590 ']' 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:39.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:39.328 [2024-07-22 17:00:58.477220] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:47:39.328 [2024-07-22 17:00:58.477328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:39.328 EAL: No free 2048 kB hugepages reported on node 1 00:47:39.328 [2024-07-22 17:00:58.555922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:39.328 [2024-07-22 17:00:58.646841] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:39.328 [2024-07-22 17:00:58.646903] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
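While the EAL/app banner above prints, waitforlisten polls the target's RPC socket until it is ready to serve requests. A minimal sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock and scripts/rpc.py (not the exact autotest_common.sh body):
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds as soon as the RPC server is listening.
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target exited before listening
        sleep 0.1
    done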
00:47:39.328 [2024-07-22 17:00:58.646927] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:39.328 [2024-07-22 17:00:58.646941] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:39.328 [2024-07-22 17:00:58.646952] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:39.328 [2024-07-22 17:00:58.647023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:39.328 [2024-07-22 17:00:58.647077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:47:39.328 [2024-07-22 17:00:58.647193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:47:39.328 [2024-07-22 17:00:58.647195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:39.328 INFO: Log level set to 20 00:47:39.328 INFO: Requests: 00:47:39.328 { 00:47:39.328 "jsonrpc": "2.0", 00:47:39.328 "method": "nvmf_set_config", 00:47:39.328 "id": 1, 00:47:39.328 "params": { 00:47:39.328 "admin_cmd_passthru": { 00:47:39.328 "identify_ctrlr": true 00:47:39.328 } 00:47:39.328 } 00:47:39.328 } 00:47:39.328 00:47:39.328 INFO: response: 00:47:39.328 { 00:47:39.328 "jsonrpc": "2.0", 00:47:39.328 "id": 1, 00:47:39.328 "result": true 00:47:39.328 } 00:47:39.328 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:39.328 INFO: Setting log level to 20 00:47:39.328 INFO: Setting log level to 20 00:47:39.328 INFO: Log level set to 20 00:47:39.328 INFO: Log level set to 20 00:47:39.328 INFO: Requests: 00:47:39.328 { 00:47:39.328 "jsonrpc": "2.0", 00:47:39.328 "method": "framework_start_init", 00:47:39.328 "id": 1 00:47:39.328 } 00:47:39.328 00:47:39.328 INFO: Requests: 00:47:39.328 { 00:47:39.328 "jsonrpc": "2.0", 00:47:39.328 "method": "framework_start_init", 00:47:39.328 "id": 1 00:47:39.328 } 00:47:39.328 00:47:39.328 [2024-07-22 17:00:58.787363] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:47:39.328 INFO: response: 00:47:39.328 { 00:47:39.328 "jsonrpc": "2.0", 00:47:39.328 "id": 1, 00:47:39.328 "result": true 00:47:39.328 } 00:47:39.328 00:47:39.328 INFO: response: 00:47:39.328 { 00:47:39.328 "jsonrpc": "2.0", 00:47:39.328 "id": 1, 00:47:39.328 "result": true 00:47:39.328 } 00:47:39.328 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.328 17:00:58 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:47:39.328 INFO: Setting log level to 40 00:47:39.328 INFO: Setting log level to 40 00:47:39.328 INFO: Setting log level to 40 00:47:39.328 [2024-07-22 17:00:58.797482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:39.328 17:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:81:00.0 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.328 17:00:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.623 Nvme0n1 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.623 [2024-07-22 17:01:01.699072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.623 [ 00:47:42.623 { 00:47:42.623 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:47:42.623 "subtype": "Discovery", 00:47:42.623 "listen_addresses": [], 00:47:42.623 "allow_any_host": true, 00:47:42.623 "hosts": [] 00:47:42.623 }, 00:47:42.623 { 00:47:42.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:42.623 "subtype": "NVMe", 00:47:42.623 "listen_addresses": [ 00:47:42.623 { 00:47:42.623 "trtype": "TCP", 00:47:42.623 "adrfam": "IPv4", 00:47:42.623 "traddr": "10.0.0.2", 00:47:42.623 "trsvcid": "4420" 00:47:42.623 } 00:47:42.623 ], 00:47:42.623 "allow_any_host": true, 00:47:42.623 "hosts": [], 00:47:42.623 "serial_number": 
"SPDK00000000000001", 00:47:42.623 "model_number": "SPDK bdev Controller", 00:47:42.623 "max_namespaces": 1, 00:47:42.623 "min_cntlid": 1, 00:47:42.623 "max_cntlid": 65519, 00:47:42.623 "namespaces": [ 00:47:42.623 { 00:47:42.623 "nsid": 1, 00:47:42.623 "bdev_name": "Nvme0n1", 00:47:42.623 "name": "Nvme0n1", 00:47:42.623 "nguid": "5BD06084599241AEA30D01E2F15B79A7", 00:47:42.623 "uuid": "5bd06084-5992-41ae-a30d-01e2f15b79a7" 00:47:42.623 } 00:47:42.623 ] 00:47:42.623 } 00:47:42.623 ] 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:47:42.623 EAL: No free 2048 kB hugepages reported on node 1 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ951302VM2P0BGN 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:47:42.623 EAL: No free 2048 kB hugepages reported on node 1 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ951302VM2P0BGN '!=' PHLJ951302VM2P0BGN ']' 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.623 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:47:42.623 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:47:42.623 17:01:01 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:47:42.623 17:01:01 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:47:42.623 17:01:01 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:47:42.623 17:01:01 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:47:42.623 17:01:01 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:47:42.623 17:01:01 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:47:42.623 rmmod nvme_tcp 00:47:42.623 rmmod nvme_fabrics 00:47:42.623 rmmod nvme_keyring 00:47:42.623 17:01:02 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:47:42.623 17:01:02 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:47:42.623 17:01:02 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:47:42.623 17:01:02 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2974590 ']' 00:47:42.623 17:01:02 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2974590 00:47:42.623 17:01:02 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 2974590 ']' 00:47:42.623 17:01:02 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 2974590 00:47:42.623 17:01:02 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:47:42.623 17:01:02 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:47:42.623 17:01:02 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2974590 00:47:42.623 17:01:02 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:47:42.623 17:01:02 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:47:42.623 17:01:02 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2974590' 00:47:42.623 killing process with pid 2974590 00:47:42.623 17:01:02 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 2974590 00:47:42.623 17:01:02 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 2974590 00:47:45.147 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:47:45.147 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:47:45.147 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:47:45.147 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:45.147 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:47:45.147 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:45.147 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:45.147 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:47.044 17:01:06 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:47:47.044 00:47:47.044 real 0m20.858s 00:47:47.044 user 0m31.268s 00:47:47.044 sys 0m2.804s 00:47:47.044 17:01:06 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:47:47.044 17:01:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:47.044 ************************************ 00:47:47.044 END TEST nvmf_identify_passthru 00:47:47.045 ************************************ 00:47:47.045 17:01:06 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:47:47.045 17:01:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:47:47.045 17:01:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:47:47.045 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:47:47.045 ************************************ 00:47:47.045 START TEST nvmf_dif 00:47:47.045 ************************************ 00:47:47.045 17:01:06 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:47:47.045 * Looking for test storage... 
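nvmf_dif now reruns nvmftestinit, rebuilding the same two-port topology the previous test used: the first E810 port is moved into a network namespace to act as the target, while the second stays in the default namespace as the initiator. The commands traced below reduce to roughly this sequence (address flushes and loopback bring-up omitted):
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP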
00:47:47.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:47.045 17:01:06 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:47.045 17:01:06 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:47.045 17:01:06 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:47.045 17:01:06 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:47.045 17:01:06 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.045 17:01:06 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.045 17:01:06 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.045 17:01:06 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:47:47.045 17:01:06 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:47.045 17:01:06 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:47:47.045 17:01:06 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:47:47.045 17:01:06 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:47:47.045 17:01:06 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:47:47.045 17:01:06 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:47.045 17:01:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:47.045 17:01:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:47:47.045 17:01:06 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:47:47.045 17:01:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:47:49.571 17:01:09 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:47:49.572 Found 0000:82:00.0 (0x8086 - 0x159b) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:47:49.572 Found 0000:82:00.1 (0x8086 - 0x159b) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:47:49.572 Found net devices under 0000:82:00.0: cvl_0_0 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:47:49.572 Found net devices under 0000:82:00.1: cvl_0_1 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:49.572 17:01:09 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:47:49.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:49.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:47:49.572 00:47:49.572 --- 10.0.0.2 ping statistics --- 00:47:49.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:49.572 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:49.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:49.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:47:49.572 00:47:49.572 --- 10.0.0.1 ping statistics --- 00:47:49.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:49.572 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:47:49.572 17:01:09 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:47:50.944 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:47:50.944 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:47:50.944 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:47:50.944 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:47:50.944 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:47:50.944 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:47:50.944 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:47:50.944 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:47:50.944 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:47:50.944 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:47:50.944 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:47:50.944 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:47:50.944 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:47:50.944 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:47:50.944 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:47:50.944 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:47:50.944 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:47:51.202 17:01:10 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:51.202 17:01:10 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:47:51.202 17:01:10 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:47:51.202 17:01:10 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:51.202 17:01:10 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:47:51.202 17:01:10 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:47:51.202 17:01:10 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:47:51.202 17:01:10 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:47:51.202 17:01:10 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:47:51.202 17:01:10 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:51.202 17:01:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:51.202 17:01:10 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2978357 00:47:51.202 17:01:10 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:47:51.202 17:01:10 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2978357 00:47:51.202 17:01:10 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 2978357 ']' 00:47:51.202 17:01:10 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:51.202 17:01:10 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:47:51.202 17:01:10 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:51.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:51.202 17:01:10 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:47:51.202 17:01:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:51.202 [2024-07-22 17:01:10.722121] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:47:51.202 [2024-07-22 17:01:10.722191] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:51.202 EAL: No free 2048 kB hugepages reported on node 1 00:47:51.202 [2024-07-22 17:01:10.794756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:51.460 [2024-07-22 17:01:10.880450] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:51.460 [2024-07-22 17:01:10.880500] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:51.460 [2024-07-22 17:01:10.880527] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:51.460 [2024-07-22 17:01:10.880539] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:51.460 [2024-07-22 17:01:10.880548] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
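Once this target is up, the dif test builds its fixture over RPC: a TCP transport with --dif-insert-or-strip and a null bdev carrying 16 bytes of metadata with DIF type 1, exported through cnode0. The rpc_cmd calls traced below are equivalent to running (paths abbreviated; a sketch, not the script verbatim):
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420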
00:47:51.460 [2024-07-22 17:01:10.880580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:51.460 17:01:10 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:47:51.460 17:01:10 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:47:51.460 17:01:10 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:47:51.460 17:01:10 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:51.460 17:01:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:51.460 17:01:11 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:51.460 17:01:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:47:51.460 17:01:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:47:51.460 17:01:11 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:51.460 17:01:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:51.460 [2024-07-22 17:01:11.020313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:51.460 17:01:11 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:51.460 17:01:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:47:51.460 17:01:11 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:47:51.460 17:01:11 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:47:51.460 17:01:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:51.460 ************************************ 00:47:51.460 START TEST fio_dif_1_default 00:47:51.460 ************************************ 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:51.460 bdev_null0 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:51.460 [2024-07-22 17:01:11.080602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:47:51.460 { 00:47:51.460 "params": { 00:47:51.460 "name": "Nvme$subsystem", 00:47:51.460 "trtype": "$TEST_TRANSPORT", 00:47:51.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:51.460 "adrfam": "ipv4", 00:47:51.460 "trsvcid": "$NVMF_PORT", 00:47:51.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:51.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:51.460 "hdgst": ${hdgst:-false}, 00:47:51.460 "ddgst": ${ddgst:-false} 00:47:51.460 }, 00:47:51.460 "method": "bdev_nvme_attach_controller" 00:47:51.460 } 00:47:51.460 EOF 00:47:51.460 )") 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:47:51.460 17:01:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:47:51.460 "params": { 00:47:51.460 "name": "Nvme0", 00:47:51.460 "trtype": "tcp", 00:47:51.460 "traddr": "10.0.0.2", 00:47:51.460 "adrfam": "ipv4", 00:47:51.460 "trsvcid": "4420", 00:47:51.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:51.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:51.460 "hdgst": false, 00:47:51.460 "ddgst": false 00:47:51.460 }, 00:47:51.460 "method": "bdev_nvme_attach_controller" 00:47:51.460 }' 00:47:51.718 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:47:51.718 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:47:51.718 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:47:51.718 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:51.718 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:47:51.718 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:47:51.718 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:47:51.718 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:47:51.718 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:47:51.718 17:01:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:51.718 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:47:51.718 fio-3.35 00:47:51.718 Starting 1 thread 00:47:51.974 EAL: No free 2048 kB hugepages reported on node 1 00:48:04.167 00:48:04.167 filename0: (groupid=0, jobs=1): err= 0: pid=2978583: Mon Jul 22 17:01:21 2024 00:48:04.167 read: IOPS=186, BW=747KiB/s (765kB/s)(7488KiB/10019msec) 00:48:04.167 slat (nsec): min=6703, max=65445, avg=9388.96, stdev=4373.84 00:48:04.167 clat (usec): min=623, max=44214, avg=21377.37, stdev=20556.88 00:48:04.167 lat (usec): min=630, max=44246, avg=21386.75, stdev=20557.31 00:48:04.167 clat percentiles (usec): 00:48:04.167 | 1.00th=[ 652], 5.00th=[ 701], 10.00th=[ 717], 20.00th=[ 734], 00:48:04.167 | 30.00th=[ 750], 40.00th=[ 766], 50.00th=[41157], 60.00th=[41157], 00:48:04.167 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:48:04.167 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:48:04.167 | 99.99th=[44303] 00:48:04.167 bw ( KiB/s): min= 672, max= 768, per=99.95%, avg=747.20, stdev=31.62, samples=20 00:48:04.167 iops : min= 168, max= 192, 
avg=186.80, stdev= 7.90, samples=20 00:48:04.167 lat (usec) : 750=30.77%, 1000=19.02% 00:48:04.167 lat (msec) : 50=50.21% 00:48:04.167 cpu : usr=89.16%, sys=10.23%, ctx=45, majf=0, minf=260 00:48:04.167 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:04.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:04.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:04.167 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:04.167 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:04.167 00:48:04.167 Run status group 0 (all jobs): 00:48:04.167 READ: bw=747KiB/s (765kB/s), 747KiB/s-747KiB/s (765kB/s-765kB/s), io=7488KiB (7668kB), run=10019-10019msec 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.167 00:48:04.167 real 0m11.129s 00:48:04.167 user 0m10.044s 00:48:04.167 sys 0m1.302s 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 ************************************ 00:48:04.167 END TEST fio_dif_1_default 00:48:04.167 ************************************ 00:48:04.167 17:01:22 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:48:04.167 17:01:22 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:48:04.167 17:01:22 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 ************************************ 00:48:04.167 START TEST fio_dif_1_multi_subsystems 00:48:04.167 ************************************ 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:48:04.167 17:01:22 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 bdev_null0 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 [2024-07-22 17:01:22.257562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 bdev_null1 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:04.167 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:48:04.168 { 00:48:04.168 "params": { 00:48:04.168 "name": "Nvme$subsystem", 00:48:04.168 "trtype": "$TEST_TRANSPORT", 00:48:04.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:04.168 "adrfam": "ipv4", 00:48:04.168 "trsvcid": "$NVMF_PORT", 00:48:04.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:04.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:04.168 "hdgst": ${hdgst:-false}, 00:48:04.168 "ddgst": ${ddgst:-false} 00:48:04.168 }, 00:48:04.168 "method": "bdev_nvme_attach_controller" 00:48:04.168 } 00:48:04.168 EOF 00:48:04.168 )") 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:04.168 
17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:48:04.168 { 00:48:04.168 "params": { 00:48:04.168 "name": "Nvme$subsystem", 00:48:04.168 "trtype": "$TEST_TRANSPORT", 00:48:04.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:04.168 "adrfam": "ipv4", 00:48:04.168 "trsvcid": "$NVMF_PORT", 00:48:04.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:04.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:04.168 "hdgst": ${hdgst:-false}, 00:48:04.168 "ddgst": ${ddgst:-false} 00:48:04.168 }, 00:48:04.168 "method": "bdev_nvme_attach_controller" 00:48:04.168 } 00:48:04.168 EOF 00:48:04.168 )") 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
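
Behind the create_subsystems 0 1 call driving this test, each subsystem is wired up with the same four RPCs that appear verbatim in the trace; for subsystem N the sequence reduces to the sketch below (rpc_cmd is the harness's RPC wrapper, and only the argument values vary per subsystem):

# sketch: per-subsystem setup as driven by create_subsystem N in target/dif.sh
sub=1
rpc_cmd bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
    --serial-number "53313233-${sub}" --allow-any-host
rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" -t tcp -a 10.0.0.2 -s 4420
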
00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:48:04.168 "params": { 00:48:04.168 "name": "Nvme0", 00:48:04.168 "trtype": "tcp", 00:48:04.168 "traddr": "10.0.0.2", 00:48:04.168 "adrfam": "ipv4", 00:48:04.168 "trsvcid": "4420", 00:48:04.168 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:04.168 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:04.168 "hdgst": false, 00:48:04.168 "ddgst": false 00:48:04.168 }, 00:48:04.168 "method": "bdev_nvme_attach_controller" 00:48:04.168 },{ 00:48:04.168 "params": { 00:48:04.168 "name": "Nvme1", 00:48:04.168 "trtype": "tcp", 00:48:04.168 "traddr": "10.0.0.2", 00:48:04.168 "adrfam": "ipv4", 00:48:04.168 "trsvcid": "4420", 00:48:04.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:04.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:04.168 "hdgst": false, 00:48:04.168 "ddgst": false 00:48:04.168 }, 00:48:04.168 "method": "bdev_nvme_attach_controller" 00:48:04.168 }' 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:04.168 17:01:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:04.168 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:48:04.168 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:48:04.168 fio-3.35 00:48:04.168 Starting 2 threads 00:48:04.168 EAL: No free 2048 kB hugepages reported on node 1 00:48:14.135 00:48:14.135 filename0: (groupid=0, jobs=1): err= 0: pid=2979982: Mon Jul 22 17:01:33 2024 00:48:14.135 read: IOPS=189, BW=758KiB/s (776kB/s)(7600KiB/10026msec) 00:48:14.135 slat (nsec): min=6659, max=53032, avg=10115.36, stdev=4958.65 00:48:14.135 clat (usec): min=615, max=46519, avg=21074.95, stdev=20282.95 00:48:14.135 lat (usec): min=622, max=46531, avg=21085.07, stdev=20282.81 00:48:14.135 clat percentiles (usec): 00:48:14.135 | 1.00th=[ 644], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 709], 00:48:14.135 | 30.00th=[ 758], 40.00th=[ 816], 50.00th=[41157], 60.00th=[41157], 00:48:14.135 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:48:14.135 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:48:14.135 | 99.99th=[46400] 
00:48:14.135 bw ( KiB/s): min= 704, max= 768, per=66.17%, avg=758.40, stdev=23.45, samples=20 00:48:14.135 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:48:14.135 lat (usec) : 750=29.16%, 1000=20.16% 00:48:14.135 lat (msec) : 2=0.58%, 50=50.11% 00:48:14.135 cpu : usr=94.00%, sys=5.72%, ctx=14, majf=0, minf=166 00:48:14.135 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:14.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:14.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:14.135 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:14.135 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:14.135 filename1: (groupid=0, jobs=1): err= 0: pid=2979983: Mon Jul 22 17:01:33 2024 00:48:14.135 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10042msec) 00:48:14.135 slat (nsec): min=6577, max=30816, avg=10338.43, stdev=4985.33 00:48:14.135 clat (usec): min=40786, max=46512, avg=41122.02, stdev=482.55 00:48:14.135 lat (usec): min=40794, max=46529, avg=41132.36, stdev=482.60 00:48:14.135 clat percentiles (usec): 00:48:14.135 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:48:14.135 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:48:14.135 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:48:14.135 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:48:14.135 | 99.99th=[46400] 00:48:14.135 bw ( KiB/s): min= 352, max= 416, per=33.87%, avg=388.80, stdev=15.66, samples=20 00:48:14.135 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:48:14.135 lat (msec) : 50=100.00% 00:48:14.135 cpu : usr=94.13%, sys=5.60%, ctx=11, majf=0, minf=83 00:48:14.135 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:14.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:14.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:14.135 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:14.135 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:14.135 00:48:14.135 Run status group 0 (all jobs): 00:48:14.135 READ: bw=1146KiB/s (1173kB/s), 389KiB/s-758KiB/s (398kB/s-776kB/s), io=11.2MiB (11.8MB), run=10026-10042msec 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:14.135 00:48:14.135 real 0m11.368s 00:48:14.135 user 0m20.354s 00:48:14.135 sys 0m1.423s 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:48:14.135 17:01:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:14.135 ************************************ 00:48:14.135 END TEST fio_dif_1_multi_subsystems 00:48:14.135 ************************************ 00:48:14.135 17:01:33 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:48:14.135 17:01:33 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:48:14.135 17:01:33 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:48:14.135 17:01:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:14.135 ************************************ 00:48:14.135 START TEST fio_dif_rand_params 00:48:14.135 ************************************ 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:14.136 17:01:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:14.136 bdev_null0 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:14.136 [2024-07-22 17:01:33.679116] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:48:14.136 { 00:48:14.136 "params": { 00:48:14.136 "name": "Nvme$subsystem", 00:48:14.136 "trtype": "$TEST_TRANSPORT", 00:48:14.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:14.136 "adrfam": "ipv4", 00:48:14.136 "trsvcid": "$NVMF_PORT", 00:48:14.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:14.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:14.136 "hdgst": ${hdgst:-false}, 00:48:14.136 "ddgst": ${ddgst:-false} 00:48:14.136 }, 00:48:14.136 "method": "bdev_nvme_attach_controller" 00:48:14.136 } 00:48:14.136 EOF 00:48:14.136 )") 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
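
The fio_bdev helper traced here amounts to running stock fio with the SPDK bdev ioengine preloaded and handing it the generated bdev JSON plus the fio job file over two descriptors. A minimal sketch, with process substitution standing in for the /dev/fd/62 and /dev/fd/61 descriptors seen in the trace:

# sketch: fio with the spdk_bdev plugin; <(...) models /dev/fd/62 and /dev/fd/61
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
LD_PRELOAD="$plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) <(gen_fio_conf)
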
00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:48:14.136 "params": { 00:48:14.136 "name": "Nvme0", 00:48:14.136 "trtype": "tcp", 00:48:14.136 "traddr": "10.0.0.2", 00:48:14.136 "adrfam": "ipv4", 00:48:14.136 "trsvcid": "4420", 00:48:14.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:14.136 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:14.136 "hdgst": false, 00:48:14.136 "ddgst": false 00:48:14.136 }, 00:48:14.136 "method": "bdev_nvme_attach_controller" 00:48:14.136 }' 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:14.136 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:48:14.137 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:14.137 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:14.137 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:14.137 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:14.137 17:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:14.395 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:48:14.395 ... 
00:48:14.395 fio-3.35 00:48:14.395 Starting 3 threads 00:48:14.395 EAL: No free 2048 kB hugepages reported on node 1 00:48:20.951 00:48:20.951 filename0: (groupid=0, jobs=1): err= 0: pid=2981388: Mon Jul 22 17:01:39 2024 00:48:20.951 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(142MiB/5003msec) 00:48:20.951 slat (nsec): min=5014, max=44400, avg=15504.74, stdev=5345.13 00:48:20.951 clat (usec): min=4812, max=88957, avg=13236.90, stdev=11140.47 00:48:20.951 lat (usec): min=4825, max=88971, avg=13252.40, stdev=11140.31 00:48:20.951 clat percentiles (usec): 00:48:20.951 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 6915], 20.00th=[ 8160], 00:48:20.951 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10683], 60.00th=[11600], 00:48:20.951 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14877], 95.00th=[49546], 00:48:20.951 | 99.00th=[53216], 99.50th=[53740], 99.90th=[88605], 99.95th=[88605], 00:48:20.951 | 99.99th=[88605] 00:48:20.951 bw ( KiB/s): min=17920, max=35840, per=36.42%, avg=28928.00, stdev=5876.24, samples=10 00:48:20.951 iops : min= 140, max= 280, avg=226.00, stdev=45.91, samples=10 00:48:20.951 lat (msec) : 10=44.61%, 20=48.23%, 50=3.18%, 100=3.98% 00:48:20.951 cpu : usr=91.78%, sys=7.74%, ctx=9, majf=0, minf=105 00:48:20.951 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:20.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:20.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:20.951 issued rwts: total=1132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:20.951 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:20.951 filename0: (groupid=0, jobs=1): err= 0: pid=2981389: Mon Jul 22 17:01:39 2024 00:48:20.951 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(122MiB/5045msec) 00:48:20.951 slat (usec): min=5, max=105, avg=16.36, stdev= 6.88 00:48:20.952 clat (usec): min=4862, max=93204, avg=15409.54, stdev=13364.07 00:48:20.952 lat (usec): min=4875, max=93219, avg=15425.90, stdev=13364.14 00:48:20.952 clat percentiles (usec): 00:48:20.952 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 6521], 20.00th=[ 8586], 00:48:20.952 | 30.00th=[ 9241], 40.00th=[10290], 50.00th=[11600], 60.00th=[12518], 00:48:20.952 | 70.00th=[13304], 80.00th=[15008], 90.00th=[47973], 95.00th=[51643], 00:48:20.952 | 99.00th=[54789], 99.50th=[58983], 99.90th=[92799], 99.95th=[92799], 00:48:20.952 | 99.99th=[92799] 00:48:20.952 bw ( KiB/s): min=19200, max=36096, per=31.43%, avg=24964.70, stdev=5781.85, samples=10 00:48:20.952 iops : min= 150, max= 282, avg=195.00, stdev=45.18, samples=10 00:48:20.952 lat (msec) : 10=37.83%, 20=51.12%, 50=3.58%, 100=7.46% 00:48:20.952 cpu : usr=92.68%, sys=6.82%, ctx=9, majf=0, minf=111 00:48:20.952 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:20.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:20.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:20.952 issued rwts: total=978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:20.952 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:20.952 filename0: (groupid=0, jobs=1): err= 0: pid=2981390: Mon Jul 22 17:01:39 2024 00:48:20.952 read: IOPS=203, BW=25.4MiB/s (26.7MB/s)(128MiB/5015msec) 00:48:20.952 slat (usec): min=4, max=105, avg=17.11, stdev= 7.35 00:48:20.952 clat (usec): min=4800, max=92999, avg=14712.10, stdev=12777.39 00:48:20.952 lat (usec): min=4812, max=93007, avg=14729.21, stdev=12777.44 00:48:20.952 clat percentiles (usec): 00:48:20.952 | 1.00th=[ 
5276], 5.00th=[ 6063], 10.00th=[ 7373], 20.00th=[ 8586], 00:48:20.952 | 30.00th=[ 9241], 40.00th=[10290], 50.00th=[11338], 60.00th=[12256], 00:48:20.952 | 70.00th=[13042], 80.00th=[14091], 90.00th=[17433], 95.00th=[50594], 00:48:20.952 | 99.00th=[55837], 99.50th=[57410], 99.90th=[92799], 99.95th=[92799], 00:48:20.952 | 99.99th=[92799] 00:48:20.952 bw ( KiB/s): min=19968, max=33792, per=32.81%, avg=26060.80, stdev=4981.30, samples=10 00:48:20.952 iops : min= 156, max= 264, avg=203.60, stdev=38.92, samples=10 00:48:20.952 lat (msec) : 10=37.41%, 20=53.28%, 50=3.13%, 100=6.17% 00:48:20.952 cpu : usr=90.67%, sys=7.86%, ctx=167, majf=0, minf=69 00:48:20.952 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:20.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:20.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:20.952 issued rwts: total=1021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:20.952 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:20.952 00:48:20.952 Run status group 0 (all jobs): 00:48:20.952 READ: bw=77.6MiB/s (81.3MB/s), 24.2MiB/s-28.3MiB/s (25.4MB/s-29.7MB/s), io=391MiB (410MB), run=5003-5045msec 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:48:20.952 17:01:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 bdev_null0 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 [2024-07-22 17:01:39.851634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 bdev_null1 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
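
This second fio_dif_rand_params pass recreates the null bdevs with --dif-type 2 and drives them with the bs=4k, numjobs=8, iodepth=16, files=2 parameters set just above. The job file gen_fio_conf emits for a run like this has roughly the shape below; the option set is limited to what the trace establishes, and the filename=Nvme${f}n1 naming (attached controller name plus namespace n1) is an assumption for illustration:

# rough shape of the generated fio job file for this pass
files=2
gen_conf_sketch() {
    echo "[global]"
    echo "bs=4k"
    echo "numjobs=8"
    echo "iodepth=16"
    echo "rw=randread"
    for ((f = 0; f <= files; f++)); do
        # one [filenameN] job per attached controller (Nvme0..Nvme2),
        # matching the filename0/filename1/filename2 lines fio prints below
        printf '[filename%d]\nfilename=Nvme%dn1\n' "$f" "$f"
    done
}
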
00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 bdev_null2 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.952 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:48:20.953 { 00:48:20.953 "params": { 00:48:20.953 "name": "Nvme$subsystem", 00:48:20.953 "trtype": "$TEST_TRANSPORT", 00:48:20.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:20.953 "adrfam": "ipv4", 00:48:20.953 "trsvcid": "$NVMF_PORT", 00:48:20.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:20.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:20.953 "hdgst": ${hdgst:-false}, 00:48:20.953 "ddgst": ${ddgst:-false} 00:48:20.953 }, 00:48:20.953 "method": "bdev_nvme_attach_controller" 00:48:20.953 } 00:48:20.953 EOF 00:48:20.953 )") 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:48:20.953 { 00:48:20.953 "params": { 00:48:20.953 "name": "Nvme$subsystem", 00:48:20.953 "trtype": "$TEST_TRANSPORT", 00:48:20.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:20.953 "adrfam": "ipv4", 00:48:20.953 "trsvcid": "$NVMF_PORT", 00:48:20.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:20.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:20.953 "hdgst": ${hdgst:-false}, 00:48:20.953 "ddgst": ${ddgst:-false} 00:48:20.953 }, 00:48:20.953 "method": "bdev_nvme_attach_controller" 00:48:20.953 } 00:48:20.953 EOF 00:48:20.953 )") 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:48:20.953 { 00:48:20.953 "params": { 00:48:20.953 "name": "Nvme$subsystem", 00:48:20.953 "trtype": "$TEST_TRANSPORT", 00:48:20.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:20.953 "adrfam": "ipv4", 00:48:20.953 "trsvcid": "$NVMF_PORT", 00:48:20.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:20.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:20.953 "hdgst": ${hdgst:-false}, 00:48:20.953 "ddgst": ${ddgst:-false} 00:48:20.953 }, 00:48:20.953 "method": "bdev_nvme_attach_controller" 00:48:20.953 } 00:48:20.953 EOF 00:48:20.953 )") 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:48:20.953 "params": { 00:48:20.953 "name": "Nvme0", 00:48:20.953 "trtype": "tcp", 00:48:20.953 "traddr": "10.0.0.2", 00:48:20.953 "adrfam": "ipv4", 00:48:20.953 "trsvcid": "4420", 00:48:20.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:20.953 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:20.953 "hdgst": false, 00:48:20.953 "ddgst": false 00:48:20.953 }, 00:48:20.953 "method": "bdev_nvme_attach_controller" 00:48:20.953 },{ 00:48:20.953 "params": { 00:48:20.953 "name": "Nvme1", 00:48:20.953 "trtype": "tcp", 00:48:20.953 "traddr": "10.0.0.2", 00:48:20.953 "adrfam": "ipv4", 00:48:20.953 "trsvcid": "4420", 00:48:20.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:20.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:20.953 "hdgst": false, 00:48:20.953 "ddgst": false 00:48:20.953 }, 00:48:20.953 "method": "bdev_nvme_attach_controller" 00:48:20.953 },{ 00:48:20.953 "params": { 00:48:20.953 "name": "Nvme2", 00:48:20.953 "trtype": "tcp", 00:48:20.953 "traddr": "10.0.0.2", 00:48:20.953 "adrfam": "ipv4", 00:48:20.953 "trsvcid": "4420", 00:48:20.953 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:48:20.953 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:48:20.953 "hdgst": false, 00:48:20.953 "ddgst": false 00:48:20.953 }, 00:48:20.953 "method": "bdev_nvme_attach_controller" 00:48:20.953 }' 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:20.953 17:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:20.953 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:20.953 ... 00:48:20.953 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:20.953 ... 00:48:20.953 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:20.953 ... 00:48:20.953 fio-3.35 00:48:20.953 Starting 24 threads 00:48:20.953 EAL: No free 2048 kB hugepages reported on node 1 00:48:33.154 00:48:33.154 filename0: (groupid=0, jobs=1): err= 0: pid=2982212: Mon Jul 22 17:01:51 2024 00:48:33.154 read: IOPS=434, BW=1737KiB/s (1778kB/s)(17.0MiB/10024msec) 00:48:33.154 slat (usec): min=8, max=154, avg=33.12, stdev=18.80 00:48:33.154 clat (msec): min=22, max=346, avg=36.57, stdev=26.30 00:48:33.154 lat (msec): min=22, max=346, avg=36.61, stdev=26.30 00:48:33.154 clat percentiles (msec): 00:48:33.154 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.154 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:48:33.154 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:48:33.154 | 99.00th=[ 215], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:48:33.154 | 99.99th=[ 347] 00:48:33.154 bw ( KiB/s): min= 368, max= 1920, per=4.17%, avg=1734.40, stdev=478.01, samples=20 00:48:33.154 iops : min= 92, max= 480, avg=433.60, stdev=119.50, samples=20 00:48:33.154 lat (msec) : 50=98.16%, 100=0.05%, 250=1.06%, 500=0.74% 00:48:33.154 cpu : usr=96.85%, sys=2.19%, ctx=88, majf=0, minf=30 00:48:33.154 IO depths : 1=5.9%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:48:33.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.154 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.154 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.154 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.154 filename0: (groupid=0, jobs=1): err= 0: pid=2982213: Mon Jul 22 17:01:51 2024 00:48:33.154 read: IOPS=431, BW=1727KiB/s (1768kB/s)(16.9MiB/10006msec) 00:48:33.154 slat (usec): min=8, max=136, avg=55.52, stdev=19.03 00:48:33.154 clat (msec): min=27, max=479, avg=36.54, stdev=32.15 00:48:33.154 lat (msec): min=27, max=479, avg=36.60, stdev=32.15 00:48:33.154 clat percentiles (msec): 00:48:33.154 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.154 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.154 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.154 | 99.00th=[ 197], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:48:33.154 | 99.99th=[ 481] 00:48:33.154 bw ( KiB/s): min= 256, max= 1923, per=4.11%, avg=1711.32, stdev=526.46, samples=19 00:48:33.154 iops : min= 64, max= 480, avg=427.79, stdev=131.60, samples=19 00:48:33.154 lat (msec) : 50=98.52%, 250=0.74%, 500=0.74% 00:48:33.154 cpu : usr=97.28%, sys=1.99%, ctx=36, majf=0, minf=20 00:48:33.154 IO depths : 
1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:33.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.154 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.154 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.154 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.154 filename0: (groupid=0, jobs=1): err= 0: pid=2982214: Mon Jul 22 17:01:51 2024 00:48:33.154 read: IOPS=430, BW=1723KiB/s (1765kB/s)(16.9MiB/10027msec) 00:48:33.154 slat (usec): min=8, max=193, avg=47.59, stdev=35.00 00:48:33.154 clat (msec): min=27, max=395, avg=36.63, stdev=32.67 00:48:33.154 lat (msec): min=27, max=395, avg=36.67, stdev=32.67 00:48:33.154 clat percentiles (msec): 00:48:33.154 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.154 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.154 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.154 | 99.00th=[ 259], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 397], 00:48:33.154 | 99.99th=[ 397] 00:48:33.154 bw ( KiB/s): min= 256, max= 1920, per=4.14%, avg=1721.60, stdev=493.96, samples=20 00:48:33.154 iops : min= 64, max= 480, avg=430.40, stdev=123.49, samples=20 00:48:33.154 lat (msec) : 50=98.15%, 100=0.37%, 250=0.37%, 500=1.11% 00:48:33.154 cpu : usr=97.45%, sys=1.75%, ctx=41, majf=0, minf=15 00:48:33.154 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:48:33.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.154 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.154 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.154 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.154 filename0: (groupid=0, jobs=1): err= 0: pid=2982215: Mon Jul 22 17:01:51 2024 00:48:33.154 read: IOPS=434, BW=1739KiB/s (1780kB/s)(17.0MiB/10027msec) 00:48:33.154 slat (usec): min=8, max=117, avg=43.34, stdev=20.45 00:48:33.154 clat (msec): min=18, max=295, avg=36.46, stdev=26.44 00:48:33.154 lat (msec): min=18, max=295, avg=36.51, stdev=26.43 00:48:33.154 clat percentiles (msec): 00:48:33.155 | 1.00th=[ 28], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.155 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:48:33.155 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.155 | 99.00th=[ 215], 99.50th=[ 268], 99.90th=[ 268], 99.95th=[ 268], 00:48:33.155 | 99.99th=[ 296] 00:48:33.155 bw ( KiB/s): min= 256, max= 1920, per=4.17%, avg=1736.80, stdev=471.60, samples=20 00:48:33.155 iops : min= 64, max= 480, avg=434.20, stdev=117.90, samples=20 00:48:33.155 lat (msec) : 20=0.37%, 50=97.43%, 100=0.37%, 250=0.92%, 500=0.92% 00:48:33.155 cpu : usr=97.11%, sys=1.88%, ctx=46, majf=0, minf=21 00:48:33.155 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:33.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 issued rwts: total=4358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.155 filename0: (groupid=0, jobs=1): err= 0: pid=2982217: Mon Jul 22 17:01:51 2024 00:48:33.155 read: IOPS=433, BW=1733KiB/s (1774kB/s)(16.9MiB/10009msec) 00:48:33.155 slat (usec): min=8, max=162, avg=58.77, stdev=21.61 00:48:33.155 clat (msec): min=27, max=266, avg=36.41, 
stdev=26.14 00:48:33.155 lat (msec): min=27, max=266, avg=36.47, stdev=26.13 00:48:33.155 clat percentiles (msec): 00:48:33.155 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.155 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.155 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.155 | 99.00th=[ 215], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:48:33.155 | 99.99th=[ 266] 00:48:33.155 bw ( KiB/s): min= 256, max= 1920, per=4.13%, avg=1717.89, stdev=507.21, samples=19 00:48:33.155 iops : min= 64, max= 480, avg=429.47, stdev=126.80, samples=19 00:48:33.155 lat (msec) : 50=97.79%, 100=0.37%, 250=1.11%, 500=0.74% 00:48:33.155 cpu : usr=97.49%, sys=1.73%, ctx=40, majf=0, minf=19 00:48:33.155 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:33.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.155 filename0: (groupid=0, jobs=1): err= 0: pid=2982218: Mon Jul 22 17:01:51 2024 00:48:33.155 read: IOPS=434, BW=1737KiB/s (1778kB/s)(17.0MiB/10024msec) 00:48:33.155 slat (usec): min=8, max=135, avg=42.35, stdev=21.58 00:48:33.155 clat (msec): min=27, max=360, avg=36.53, stdev=26.36 00:48:33.155 lat (msec): min=27, max=360, avg=36.57, stdev=26.36 00:48:33.155 clat percentiles (msec): 00:48:33.155 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.155 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:48:33.155 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.155 | 99.00th=[ 215], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:48:33.155 | 99.99th=[ 359] 00:48:33.155 bw ( KiB/s): min= 368, max= 1920, per=4.17%, avg=1734.40, stdev=478.01, samples=20 00:48:33.155 iops : min= 92, max= 480, avg=433.60, stdev=119.50, samples=20 00:48:33.155 lat (msec) : 50=98.16%, 100=0.05%, 250=1.10%, 500=0.69% 00:48:33.155 cpu : usr=95.65%, sys=2.64%, ctx=274, majf=0, minf=24 00:48:33.155 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:33.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.155 filename0: (groupid=0, jobs=1): err= 0: pid=2982219: Mon Jul 22 17:01:51 2024 00:48:33.155 read: IOPS=429, BW=1719KiB/s (1761kB/s)(16.8MiB/10004msec) 00:48:33.155 slat (usec): min=8, max=148, avg=54.66, stdev=24.00 00:48:33.155 clat (msec): min=12, max=455, avg=36.74, stdev=34.53 00:48:33.155 lat (msec): min=12, max=455, avg=36.79, stdev=34.52 00:48:33.155 clat percentiles (msec): 00:48:33.155 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.155 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.155 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.155 | 99.00th=[ 313], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 439], 00:48:33.155 | 99.99th=[ 456] 00:48:33.155 bw ( KiB/s): min= 128, max= 2048, per=4.09%, avg=1702.89, stdev=542.70, samples=19 00:48:33.155 iops : min= 32, max= 512, avg=425.68, stdev=135.67, samples=19 00:48:33.155 lat (msec) : 20=0.30%, 50=97.65%, 
100=0.93%, 250=0.05%, 500=1.07% 00:48:33.155 cpu : usr=97.29%, sys=1.87%, ctx=80, majf=0, minf=23 00:48:33.155 IO depths : 1=5.1%, 2=11.2%, 4=24.6%, 8=51.7%, 16=7.5%, 32=0.0%, >=64=0.0% 00:48:33.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 issued rwts: total=4300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.155 filename0: (groupid=0, jobs=1): err= 0: pid=2982220: Mon Jul 22 17:01:51 2024 00:48:33.155 read: IOPS=452, BW=1811KiB/s (1854kB/s)(17.7MiB/10013msec) 00:48:33.155 slat (usec): min=6, max=163, avg=24.96, stdev=20.19 00:48:33.155 clat (msec): min=4, max=342, avg=35.13, stdev=28.66 00:48:33.155 lat (msec): min=4, max=342, avg=35.16, stdev=28.66 00:48:33.155 clat percentiles (msec): 00:48:33.155 | 1.00th=[ 6], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 32], 00:48:33.155 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:48:33.155 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.155 | 99.00th=[ 268], 99.50th=[ 268], 99.90th=[ 313], 99.95th=[ 313], 00:48:33.155 | 99.99th=[ 342] 00:48:33.155 bw ( KiB/s): min= 256, max= 2504, per=4.34%, avg=1806.80, stdev=556.23, samples=20 00:48:33.155 iops : min= 64, max= 626, avg=451.70, stdev=139.06, samples=20 00:48:33.155 lat (msec) : 10=3.24%, 20=1.54%, 50=93.49%, 100=0.31%, 250=0.35% 00:48:33.155 lat (msec) : 500=1.06% 00:48:33.155 cpu : usr=97.03%, sys=1.98%, ctx=166, majf=0, minf=22 00:48:33.155 IO depths : 1=3.4%, 2=8.5%, 4=21.1%, 8=57.7%, 16=9.2%, 32=0.0%, >=64=0.0% 00:48:33.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 complete : 0=0.0%, 4=93.1%, 8=1.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 issued rwts: total=4533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.155 filename1: (groupid=0, jobs=1): err= 0: pid=2982221: Mon Jul 22 17:01:51 2024 00:48:33.155 read: IOPS=434, BW=1738KiB/s (1780kB/s)(17.0MiB/10014msec) 00:48:33.155 slat (usec): min=7, max=163, avg=31.96, stdev=19.53 00:48:33.155 clat (msec): min=11, max=275, avg=36.54, stdev=26.24 00:48:33.155 lat (msec): min=11, max=275, avg=36.57, stdev=26.24 00:48:33.155 clat percentiles (msec): 00:48:33.155 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33], 00:48:33.155 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:48:33.155 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:48:33.155 | 99.00th=[ 215], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 271], 00:48:33.155 | 99.99th=[ 275] 00:48:33.155 bw ( KiB/s): min= 368, max= 2048, per=4.17%, avg=1734.40, stdev=486.73, samples=20 00:48:33.155 iops : min= 92, max= 512, avg=433.60, stdev=121.68, samples=20 00:48:33.155 lat (msec) : 20=0.37%, 50=97.79%, 100=0.05%, 250=1.06%, 500=0.74% 00:48:33.155 cpu : usr=96.84%, sys=2.11%, ctx=49, majf=0, minf=33 00:48:33.155 IO depths : 1=5.0%, 2=10.8%, 4=23.3%, 8=53.4%, 16=7.5%, 32=0.0%, >=64=0.0% 00:48:33.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.155 filename1: (groupid=0, jobs=1): err= 0: pid=2982222: Mon Jul 22 17:01:51 2024 00:48:33.155 read: IOPS=431, 
BW=1727KiB/s (1769kB/s)(16.9MiB/10004msec) 00:48:33.155 slat (usec): min=8, max=150, avg=54.46, stdev=21.64 00:48:33.155 clat (msec): min=17, max=392, avg=36.53, stdev=31.88 00:48:33.155 lat (msec): min=17, max=392, avg=36.58, stdev=31.88 00:48:33.155 clat percentiles (msec): 00:48:33.155 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.155 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.155 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.155 | 99.00th=[ 197], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:48:33.155 | 99.99th=[ 393] 00:48:33.155 bw ( KiB/s): min= 256, max= 1923, per=4.11%, avg=1711.32, stdev=526.46, samples=19 00:48:33.155 iops : min= 64, max= 480, avg=427.79, stdev=131.60, samples=19 00:48:33.155 lat (msec) : 20=0.21%, 50=98.26%, 100=0.05%, 250=0.69%, 500=0.79% 00:48:33.155 cpu : usr=97.08%, sys=1.92%, ctx=87, majf=0, minf=29 00:48:33.155 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:33.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.155 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.155 filename1: (groupid=0, jobs=1): err= 0: pid=2982223: Mon Jul 22 17:01:51 2024 00:48:33.155 read: IOPS=434, BW=1736KiB/s (1778kB/s)(17.0MiB/10027msec) 00:48:33.155 slat (usec): min=8, max=103, avg=43.03, stdev=18.34 00:48:33.155 clat (msec): min=26, max=267, avg=36.51, stdev=25.73 00:48:33.155 lat (msec): min=26, max=267, avg=36.55, stdev=25.73 00:48:33.155 clat percentiles (msec): 00:48:33.155 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.155 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:48:33.155 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.155 | 99.00th=[ 215], 99.50th=[ 268], 99.90th=[ 268], 99.95th=[ 268], 00:48:33.155 | 99.99th=[ 268] 00:48:33.155 bw ( KiB/s): min= 256, max= 1920, per=4.17%, avg=1734.40, stdev=472.54, samples=20 00:48:33.155 iops : min= 64, max= 480, avg=433.60, stdev=118.14, samples=20 00:48:33.155 lat (msec) : 50=97.79%, 100=0.37%, 250=1.10%, 500=0.74% 00:48:33.155 cpu : usr=97.22%, sys=1.89%, ctx=154, majf=0, minf=21 00:48:33.155 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:48:33.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.156 filename1: (groupid=0, jobs=1): err= 0: pid=2982225: Mon Jul 22 17:01:51 2024 00:48:33.156 read: IOPS=431, BW=1727KiB/s (1768kB/s)(16.9MiB/10007msec) 00:48:33.156 slat (usec): min=8, max=143, avg=54.48, stdev=19.93 00:48:33.156 clat (msec): min=24, max=392, avg=36.57, stdev=32.00 00:48:33.156 lat (msec): min=24, max=392, avg=36.63, stdev=32.00 00:48:33.156 clat percentiles (msec): 00:48:33.156 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.156 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.156 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.156 | 99.00th=[ 197], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:48:33.156 | 99.99th=[ 393] 00:48:33.156 bw ( KiB/s): min= 256, max= 1936, 
per=4.11%, avg=1711.16, stdev=526.42, samples=19 00:48:33.156 iops : min= 64, max= 484, avg=427.79, stdev=131.61, samples=19 00:48:33.156 lat (msec) : 50=98.52%, 250=0.74%, 500=0.74% 00:48:33.156 cpu : usr=96.46%, sys=2.21%, ctx=187, majf=0, minf=22 00:48:33.156 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:48:33.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.156 filename1: (groupid=0, jobs=1): err= 0: pid=2982226: Mon Jul 22 17:01:51 2024 00:48:33.156 read: IOPS=434, BW=1736KiB/s (1778kB/s)(17.0MiB/10026msec) 00:48:33.156 slat (usec): min=8, max=196, avg=36.04, stdev=23.83 00:48:33.156 clat (msec): min=19, max=313, avg=36.59, stdev=25.90 00:48:33.156 lat (msec): min=19, max=313, avg=36.63, stdev=25.90 00:48:33.156 clat percentiles (msec): 00:48:33.156 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.156 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:48:33.156 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.156 | 99.00th=[ 215], 99.50th=[ 268], 99.90th=[ 268], 99.95th=[ 268], 00:48:33.156 | 99.99th=[ 313] 00:48:33.156 bw ( KiB/s): min= 256, max= 1920, per=4.17%, avg=1734.40, stdev=472.54, samples=20 00:48:33.156 iops : min= 64, max= 480, avg=433.60, stdev=118.14, samples=20 00:48:33.156 lat (msec) : 20=0.05%, 50=97.43%, 100=0.69%, 250=1.10%, 500=0.74% 00:48:33.156 cpu : usr=95.12%, sys=2.92%, ctx=132, majf=0, minf=22 00:48:33.156 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:33.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.156 filename1: (groupid=0, jobs=1): err= 0: pid=2982227: Mon Jul 22 17:01:51 2024 00:48:33.156 read: IOPS=434, BW=1736KiB/s (1778kB/s)(17.0MiB/10026msec) 00:48:33.156 slat (usec): min=8, max=413, avg=46.98, stdev=32.18 00:48:33.156 clat (msec): min=27, max=265, avg=36.41, stdev=26.05 00:48:33.156 lat (msec): min=27, max=265, avg=36.45, stdev=26.05 00:48:33.156 clat percentiles (msec): 00:48:33.156 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.156 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.156 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:48:33.156 | 99.00th=[ 215], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:48:33.156 | 99.99th=[ 266] 00:48:33.156 bw ( KiB/s): min= 384, max= 1920, per=4.17%, avg=1734.40, stdev=476.18, samples=20 00:48:33.156 iops : min= 96, max= 480, avg=433.60, stdev=119.04, samples=20 00:48:33.156 lat (msec) : 50=98.16%, 250=1.10%, 500=0.74% 00:48:33.156 cpu : usr=95.01%, sys=2.87%, ctx=141, majf=0, minf=28 00:48:33.156 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:33.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.156 
filename1: (groupid=0, jobs=1): err= 0: pid=2982228: Mon Jul 22 17:01:51 2024 00:48:33.156 read: IOPS=432, BW=1732KiB/s (1773kB/s)(16.9MiB/10016msec) 00:48:33.156 slat (usec): min=8, max=224, avg=54.26, stdev=25.25 00:48:33.156 clat (msec): min=21, max=365, avg=36.45, stdev=26.50 00:48:33.156 lat (msec): min=21, max=365, avg=36.50, stdev=26.50 00:48:33.156 clat percentiles (msec): 00:48:33.156 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.156 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.156 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.156 | 99.00th=[ 215], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:48:33.156 | 99.99th=[ 368] 00:48:33.156 bw ( KiB/s): min= 256, max= 2048, per=4.15%, avg=1727.00, stdev=498.85, samples=20 00:48:33.156 iops : min= 64, max= 512, avg=431.75, stdev=124.71, samples=20 00:48:33.156 lat (msec) : 50=97.79%, 100=0.42%, 250=1.11%, 500=0.69% 00:48:33.156 cpu : usr=97.89%, sys=1.67%, ctx=25, majf=0, minf=27 00:48:33.156 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:33.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.156 filename1: (groupid=0, jobs=1): err= 0: pid=2982229: Mon Jul 22 17:01:51 2024 00:48:33.156 read: IOPS=434, BW=1736KiB/s (1778kB/s)(17.0MiB/10026msec) 00:48:33.156 slat (usec): min=8, max=1064, avg=29.82, stdev=22.05 00:48:33.156 clat (msec): min=27, max=266, avg=36.61, stdev=26.04 00:48:33.156 lat (msec): min=27, max=266, avg=36.64, stdev=26.04 00:48:33.156 clat percentiles (msec): 00:48:33.156 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.156 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:48:33.156 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.156 | 99.00th=[ 215], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:48:33.156 | 99.99th=[ 266] 00:48:33.156 bw ( KiB/s): min= 384, max= 1920, per=4.17%, avg=1734.40, stdev=476.18, samples=20 00:48:33.156 iops : min= 96, max= 480, avg=433.60, stdev=119.04, samples=20 00:48:33.156 lat (msec) : 50=98.16%, 250=1.10%, 500=0.74% 00:48:33.156 cpu : usr=97.79%, sys=1.76%, ctx=28, majf=0, minf=25 00:48:33.156 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:33.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.156 filename2: (groupid=0, jobs=1): err= 0: pid=2982230: Mon Jul 22 17:01:51 2024 00:48:33.156 read: IOPS=430, BW=1723KiB/s (1764kB/s)(16.8MiB/10005msec) 00:48:33.156 slat (usec): min=9, max=120, avg=52.10, stdev=15.80 00:48:33.156 clat (msec): min=23, max=481, avg=36.66, stdev=32.50 00:48:33.156 lat (msec): min=23, max=481, avg=36.72, stdev=32.50 00:48:33.156 clat percentiles (msec): 00:48:33.156 | 1.00th=[ 30], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.156 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.156 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.156 | 99.00th=[ 197], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 
393], 00:48:33.156 | 99.99th=[ 481] 00:48:33.156 bw ( KiB/s): min= 240, max= 1920, per=4.10%, avg=1706.95, stdev=526.92, samples=19 00:48:33.156 iops : min= 60, max= 480, avg=426.74, stdev=131.73, samples=19 00:48:33.156 lat (msec) : 50=98.00%, 100=0.56%, 250=0.65%, 500=0.79% 00:48:33.156 cpu : usr=98.05%, sys=1.45%, ctx=43, majf=0, minf=23 00:48:33.156 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:48:33.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 issued rwts: total=4310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.156 filename2: (groupid=0, jobs=1): err= 0: pid=2982231: Mon Jul 22 17:01:51 2024 00:48:33.156 read: IOPS=434, BW=1736KiB/s (1778kB/s)(17.0MiB/10027msec) 00:48:33.156 slat (usec): min=8, max=145, avg=40.21, stdev=22.94 00:48:33.156 clat (msec): min=27, max=267, avg=36.56, stdev=25.72 00:48:33.156 lat (msec): min=27, max=267, avg=36.60, stdev=25.72 00:48:33.156 clat percentiles (msec): 00:48:33.156 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.156 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:48:33.156 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.156 | 99.00th=[ 215], 99.50th=[ 268], 99.90th=[ 268], 99.95th=[ 268], 00:48:33.156 | 99.99th=[ 268] 00:48:33.156 bw ( KiB/s): min= 256, max= 1920, per=4.17%, avg=1734.40, stdev=472.54, samples=20 00:48:33.156 iops : min= 64, max= 480, avg=433.60, stdev=118.14, samples=20 00:48:33.156 lat (msec) : 50=97.43%, 100=0.74%, 250=1.10%, 500=0.74% 00:48:33.156 cpu : usr=96.60%, sys=2.35%, ctx=194, majf=0, minf=39 00:48:33.156 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:48:33.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.156 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.156 filename2: (groupid=0, jobs=1): err= 0: pid=2982233: Mon Jul 22 17:01:51 2024 00:48:33.156 read: IOPS=434, BW=1736KiB/s (1778kB/s)(17.0MiB/10026msec) 00:48:33.156 slat (usec): min=8, max=221, avg=47.63, stdev=31.13 00:48:33.156 clat (msec): min=26, max=304, avg=36.39, stdev=26.31 00:48:33.156 lat (msec): min=26, max=304, avg=36.44, stdev=26.31 00:48:33.156 clat percentiles (msec): 00:48:33.157 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.157 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.157 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.157 | 99.00th=[ 215], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 268], 00:48:33.157 | 99.99th=[ 305] 00:48:33.157 bw ( KiB/s): min= 368, max= 1920, per=4.17%, avg=1734.40, stdev=478.01, samples=20 00:48:33.157 iops : min= 92, max= 480, avg=433.60, stdev=119.50, samples=20 00:48:33.157 lat (msec) : 50=98.16%, 100=0.05%, 250=1.06%, 500=0.74% 00:48:33.157 cpu : usr=96.65%, sys=2.18%, ctx=159, majf=0, minf=20 00:48:33.157 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:33.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 issued rwts: total=4352,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:48:33.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.157 filename2: (groupid=0, jobs=1): err= 0: pid=2982234: Mon Jul 22 17:01:51 2024 00:48:33.157 read: IOPS=433, BW=1733KiB/s (1774kB/s)(16.9MiB/10009msec) 00:48:33.157 slat (usec): min=5, max=117, avg=33.30, stdev=21.15 00:48:33.157 clat (msec): min=28, max=356, avg=36.65, stdev=26.40 00:48:33.157 lat (msec): min=28, max=356, avg=36.69, stdev=26.40 00:48:33.157 clat percentiles (msec): 00:48:33.157 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.157 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:48:33.157 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:48:33.157 | 99.00th=[ 215], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:48:33.157 | 99.99th=[ 359] 00:48:33.157 bw ( KiB/s): min= 368, max= 1920, per=4.15%, avg=1728.00, stdev=483.22, samples=20 00:48:33.157 iops : min= 92, max= 480, avg=432.00, stdev=120.80, samples=20 00:48:33.157 lat (msec) : 50=97.79%, 100=0.42%, 250=1.06%, 500=0.74% 00:48:33.157 cpu : usr=93.69%, sys=3.53%, ctx=201, majf=0, minf=25 00:48:33.157 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:33.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.157 filename2: (groupid=0, jobs=1): err= 0: pid=2982235: Mon Jul 22 17:01:51 2024 00:48:33.157 read: IOPS=434, BW=1738KiB/s (1780kB/s)(17.0MiB/10014msec) 00:48:33.157 slat (usec): min=7, max=197, avg=54.20, stdev=36.23 00:48:33.157 clat (msec): min=12, max=265, avg=36.28, stdev=26.07 00:48:33.157 lat (msec): min=12, max=265, avg=36.33, stdev=26.07 00:48:33.157 clat percentiles (msec): 00:48:33.157 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33], 00:48:33.157 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.157 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.157 | 99.00th=[ 213], 99.50th=[ 264], 99.90th=[ 266], 99.95th=[ 266], 00:48:33.157 | 99.99th=[ 266] 00:48:33.157 bw ( KiB/s): min= 384, max= 2048, per=4.17%, avg=1734.40, stdev=485.15, samples=20 00:48:33.157 iops : min= 96, max= 512, avg=433.60, stdev=121.29, samples=20 00:48:33.157 lat (msec) : 20=0.37%, 50=97.79%, 250=1.10%, 500=0.74% 00:48:33.157 cpu : usr=94.17%, sys=3.17%, ctx=148, majf=0, minf=33 00:48:33.157 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:33.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.157 filename2: (groupid=0, jobs=1): err= 0: pid=2982236: Mon Jul 22 17:01:51 2024 00:48:33.157 read: IOPS=431, BW=1726KiB/s (1768kB/s)(16.9MiB/10011msec) 00:48:33.157 slat (usec): min=12, max=205, avg=55.29, stdev=28.32 00:48:33.157 clat (msec): min=23, max=392, avg=36.52, stdev=32.01 00:48:33.157 lat (msec): min=23, max=392, avg=36.58, stdev=32.01 00:48:33.157 clat percentiles (msec): 00:48:33.157 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.157 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.157 | 70.00th=[ 33], 80.00th=[ 
34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.157 | 99.00th=[ 197], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:48:33.157 | 99.99th=[ 393] 00:48:33.157 bw ( KiB/s): min= 256, max= 2048, per=4.14%, avg=1721.60, stdev=516.15, samples=20 00:48:33.157 iops : min= 64, max= 512, avg=430.40, stdev=129.04, samples=20 00:48:33.157 lat (msec) : 50=98.15%, 100=0.37%, 250=0.74%, 500=0.74% 00:48:33.157 cpu : usr=97.57%, sys=1.69%, ctx=78, majf=0, minf=20 00:48:33.157 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:33.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.157 filename2: (groupid=0, jobs=1): err= 0: pid=2982237: Mon Jul 22 17:01:51 2024 00:48:33.157 read: IOPS=430, BW=1723KiB/s (1764kB/s)(16.8MiB/10005msec) 00:48:33.157 slat (usec): min=8, max=131, avg=49.92, stdev=22.19 00:48:33.157 clat (msec): min=11, max=392, avg=36.68, stdev=32.10 00:48:33.157 lat (msec): min=11, max=392, avg=36.73, stdev=32.10 00:48:33.157 clat percentiles (msec): 00:48:33.157 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.157 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:48:33.157 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.157 | 99.00th=[ 197], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:48:33.157 | 99.99th=[ 393] 00:48:33.157 bw ( KiB/s): min= 256, max= 2032, per=4.10%, avg=1706.95, stdev=526.36, samples=19 00:48:33.157 iops : min= 64, max= 508, avg=426.74, stdev=131.59, samples=19 00:48:33.157 lat (msec) : 20=0.35%, 50=97.22%, 100=0.95%, 250=0.74%, 500=0.74% 00:48:33.157 cpu : usr=93.97%, sys=3.31%, ctx=350, majf=0, minf=27 00:48:33.157 IO depths : 1=5.0%, 2=11.1%, 4=24.5%, 8=51.8%, 16=7.6%, 32=0.0%, >=64=0.0% 00:48:33.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 issued rwts: total=4310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.157 filename2: (groupid=0, jobs=1): err= 0: pid=2982238: Mon Jul 22 17:01:51 2024 00:48:33.157 read: IOPS=434, BW=1740KiB/s (1782kB/s)(17.0MiB/10019msec) 00:48:33.157 slat (usec): min=8, max=130, avg=44.60, stdev=23.59 00:48:33.157 clat (msec): min=18, max=329, avg=36.38, stdev=26.45 00:48:33.157 lat (msec): min=18, max=329, avg=36.43, stdev=26.45 00:48:33.157 clat percentiles (msec): 00:48:33.157 | 1.00th=[ 28], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:48:33.157 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:48:33.157 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:48:33.157 | 99.00th=[ 213], 99.50th=[ 268], 99.90th=[ 330], 99.95th=[ 330], 00:48:33.157 | 99.99th=[ 330] 00:48:33.157 bw ( KiB/s): min= 256, max= 2048, per=4.17%, avg=1736.80, stdev=493.06, samples=20 00:48:33.157 iops : min= 64, max= 512, avg=434.20, stdev=123.26, samples=20 00:48:33.157 lat (msec) : 20=0.46%, 50=97.20%, 100=0.37%, 250=1.24%, 500=0.73% 00:48:33.157 cpu : usr=94.86%, sys=2.85%, ctx=184, majf=0, minf=29 00:48:33.157 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:48:33.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 complete : 
0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:33.157 issued rwts: total=4358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:33.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:33.157 00:48:33.157 Run status group 0 (all jobs): 00:48:33.157 READ: bw=40.6MiB/s (42.6MB/s), 1719KiB/s-1811KiB/s (1761kB/s-1854kB/s), io=407MiB (427MB), run=10004-10027msec 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:48:33.157 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.158 
17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.158 bdev_null0 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.158 [2024-07-22 17:01:51.581113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.158 bdev_null1 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:48:33.158 { 00:48:33.158 "params": { 00:48:33.158 "name": "Nvme$subsystem", 00:48:33.158 "trtype": "$TEST_TRANSPORT", 00:48:33.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:33.158 "adrfam": "ipv4", 00:48:33.158 "trsvcid": "$NVMF_PORT", 00:48:33.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:33.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:33.158 "hdgst": ${hdgst:-false}, 00:48:33.158 "ddgst": ${ddgst:-false} 00:48:33.158 }, 00:48:33.158 "method": "bdev_nvme_attach_controller" 00:48:33.158 } 00:48:33.158 EOF 00:48:33.158 )") 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:48:33.158 { 00:48:33.158 "params": { 00:48:33.158 "name": "Nvme$subsystem", 00:48:33.158 "trtype": "$TEST_TRANSPORT", 00:48:33.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:33.158 "adrfam": "ipv4", 00:48:33.158 "trsvcid": "$NVMF_PORT", 00:48:33.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:33.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:33.158 "hdgst": ${hdgst:-false}, 00:48:33.158 "ddgst": ${ddgst:-false} 00:48:33.158 }, 00:48:33.158 "method": "bdev_nvme_attach_controller" 00:48:33.158 } 00:48:33.158 EOF 00:48:33.158 )") 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
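The xtrace above is gen_nvmf_target_json at work: each pass of the for loop renders the heredoc template with one subsystem's NQN, port, and digest defaults and appends the fragment to a bash array, after which the fragments are comma-joined and pushed through jq, producing the expanded config printed just below. A minimal sketch of that pattern, assuming the joined fragments are spliced into a bdev-subsystem wrapper inside the jq heredoc (consistent with the jq/IFS=,/printf ordering in the trace); the wrapper fields here are a reconstruction, and the real nvmf/common.sh helper may add further config entries around the attach calls:

config=()
for subsystem in "${@:-1}"; do
  # render one bdev_nvme_attach_controller fragment per subsystem;
  # ${hdgst:-false}/${ddgst:-false} default both digest flags off
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# splice the comma-joined fragments into the config array and let jq
# validate and pretty-print the whole document for --spdk_json_conf
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=","; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON

The "${config[*]}" join uses the first character of IFS, which is what yields the },{ boundaries visible in the expanded printf argument below.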
00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:48:33.158 "params": { 00:48:33.158 "name": "Nvme0", 00:48:33.158 "trtype": "tcp", 00:48:33.158 "traddr": "10.0.0.2", 00:48:33.158 "adrfam": "ipv4", 00:48:33.158 "trsvcid": "4420", 00:48:33.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:33.158 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:33.158 "hdgst": false, 00:48:33.158 "ddgst": false 00:48:33.158 }, 00:48:33.158 "method": "bdev_nvme_attach_controller" 00:48:33.158 },{ 00:48:33.158 "params": { 00:48:33.158 "name": "Nvme1", 00:48:33.158 "trtype": "tcp", 00:48:33.158 "traddr": "10.0.0.2", 00:48:33.158 "adrfam": "ipv4", 00:48:33.158 "trsvcid": "4420", 00:48:33.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:33.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:33.158 "hdgst": false, 00:48:33.158 "ddgst": false 00:48:33.158 }, 00:48:33.158 "method": "bdev_nvme_attach_controller" 00:48:33.158 }' 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:33.158 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:33.159 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:33.159 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:33.159 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:48:33.159 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:33.159 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:33.159 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:33.159 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:33.159 17:01:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:33.159 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:48:33.159 ... 00:48:33.159 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:48:33.159 ... 
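The filename0/filename1 banner lines opening the fio output below reflect the job file that gen_fio_conf (target/dif.sh in the trace) writes to /dev/fd/61. A plausible reconstruction from that banner plus the parameters set earlier (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5); this is not the literal script output, and the keys and bdev names flagged in the comments are assumptions:

# reconstructed from the fio banner; bs=8k,16k,128k maps to the
# (R) 8192B / (W) 16.0KiB / (T) 128KiB sizes fio reports per job
cat <<EOF
[global]
; thread=1 and time_based are assumed, not visible in the trace
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
; bdev names are assumed; the ioengine=spdk_bdev comes from the command line
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

With two job sections and numjobs=2, fio fans this out to the "Starting 4 threads" seen next.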
00:48:33.159 fio-3.35 00:48:33.159 Starting 4 threads 00:48:33.159 EAL: No free 2048 kB hugepages reported on node 1 00:48:38.523 00:48:38.523 filename0: (groupid=0, jobs=1): err= 0: pid=2983529: Mon Jul 22 17:01:57 2024 00:48:38.523 read: IOPS=1821, BW=14.2MiB/s (14.9MB/s)(71.2MiB/5002msec) 00:48:38.523 slat (nsec): min=3997, max=96187, avg=22685.14, stdev=12152.99 00:48:38.523 clat (usec): min=809, max=7541, avg=4313.19, stdev=428.30 00:48:38.523 lat (usec): min=823, max=7577, avg=4335.88, stdev=429.04 00:48:38.523 clat percentiles (usec): 00:48:38.523 | 1.00th=[ 3097], 5.00th=[ 3687], 10.00th=[ 3916], 20.00th=[ 4146], 00:48:38.523 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4359], 00:48:38.523 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 4686], 00:48:38.523 | 99.00th=[ 5997], 99.50th=[ 6456], 99.90th=[ 6980], 99.95th=[ 7177], 00:48:38.523 | 99.99th=[ 7570] 00:48:38.523 bw ( KiB/s): min=14272, max=15182, per=25.08%, avg=14567.80, stdev=317.27, samples=10 00:48:38.523 iops : min= 1784, max= 1897, avg=1820.90, stdev=39.50, samples=10 00:48:38.523 lat (usec) : 1000=0.01% 00:48:38.523 lat (msec) : 2=0.18%, 4=11.90%, 10=87.92% 00:48:38.523 cpu : usr=93.02%, sys=6.34%, ctx=16, majf=0, minf=40 00:48:38.523 IO depths : 1=0.2%, 2=12.9%, 4=61.1%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:38.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:38.523 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:38.523 issued rwts: total=9111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:38.523 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:38.523 filename0: (groupid=0, jobs=1): err= 0: pid=2983530: Mon Jul 22 17:01:57 2024 00:48:38.523 read: IOPS=1806, BW=14.1MiB/s (14.8MB/s)(70.6MiB/5002msec) 00:48:38.523 slat (nsec): min=4316, max=79859, avg=21066.99, stdev=11862.16 00:48:38.523 clat (usec): min=988, max=10438, avg=4360.78, stdev=463.90 00:48:38.523 lat (usec): min=1001, max=10451, avg=4381.85, stdev=463.73 00:48:38.523 clat percentiles (usec): 00:48:38.523 | 1.00th=[ 3294], 5.00th=[ 3785], 10.00th=[ 3982], 20.00th=[ 4178], 00:48:38.523 | 30.00th=[ 4293], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:48:38.523 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 4752], 00:48:38.523 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 8029], 99.95th=[10421], 00:48:38.523 | 99.99th=[10421] 00:48:38.523 bw ( KiB/s): min=14064, max=14864, per=24.87%, avg=14444.80, stdev=239.68, samples=10 00:48:38.523 iops : min= 1758, max= 1858, avg=1805.60, stdev=29.96, samples=10 00:48:38.523 lat (usec) : 1000=0.01% 00:48:38.523 lat (msec) : 2=0.10%, 4=10.31%, 10=89.49%, 20=0.09% 00:48:38.523 cpu : usr=94.34%, sys=5.14%, ctx=9, majf=0, minf=59 00:48:38.523 IO depths : 1=0.3%, 2=9.4%, 4=64.6%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:38.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:38.523 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:38.523 issued rwts: total=9036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:38.523 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:38.523 filename1: (groupid=0, jobs=1): err= 0: pid=2983531: Mon Jul 22 17:01:57 2024 00:48:38.523 read: IOPS=1820, BW=14.2MiB/s (14.9MB/s)(71.2MiB/5004msec) 00:48:38.523 slat (nsec): min=6804, max=69056, avg=24531.94, stdev=9095.64 00:48:38.523 clat (usec): min=979, max=8332, avg=4309.00, stdev=424.85 00:48:38.523 lat (usec): min=1002, max=8369, avg=4333.54, 
stdev=424.79 00:48:38.523 clat percentiles (usec): 00:48:38.523 | 1.00th=[ 3130], 5.00th=[ 3654], 10.00th=[ 3916], 20.00th=[ 4146], 00:48:38.523 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359], 00:48:38.523 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 4686], 00:48:38.523 | 99.00th=[ 5866], 99.50th=[ 6128], 99.90th=[ 7504], 99.95th=[ 7898], 00:48:38.523 | 99.99th=[ 8356] 00:48:38.523 bw ( KiB/s): min=14336, max=15328, per=25.07%, avg=14561.60, stdev=335.49, samples=10 00:48:38.523 iops : min= 1792, max= 1916, avg=1820.20, stdev=41.94, samples=10 00:48:38.523 lat (usec) : 1000=0.01% 00:48:38.523 lat (msec) : 2=0.09%, 4=12.02%, 10=87.88% 00:48:38.523 cpu : usr=94.42%, sys=4.58%, ctx=126, majf=0, minf=35 00:48:38.523 IO depths : 1=0.2%, 2=15.5%, 4=58.1%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:38.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:38.523 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:38.523 issued rwts: total=9109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:38.523 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:38.523 filename1: (groupid=0, jobs=1): err= 0: pid=2983532: Mon Jul 22 17:01:57 2024 00:48:38.523 read: IOPS=1813, BW=14.2MiB/s (14.9MB/s)(70.9MiB/5002msec) 00:48:38.523 slat (nsec): min=4476, max=96288, avg=22447.61, stdev=11464.21 00:48:38.523 clat (usec): min=810, max=8019, avg=4330.97, stdev=419.74 00:48:38.523 lat (usec): min=823, max=8032, avg=4353.42, stdev=420.04 00:48:38.523 clat percentiles (usec): 00:48:38.523 | 1.00th=[ 3228], 5.00th=[ 3785], 10.00th=[ 3949], 20.00th=[ 4178], 00:48:38.523 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4359], 00:48:38.523 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 4686], 00:48:38.523 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 7111], 99.95th=[ 7373], 00:48:38.523 | 99.99th=[ 8029] 00:48:38.523 bw ( KiB/s): min=14336, max=14944, per=25.02%, avg=14533.33, stdev=194.98, samples=9 00:48:38.523 iops : min= 1792, max= 1868, avg=1816.67, stdev=24.37, samples=9 00:48:38.523 lat (usec) : 1000=0.03% 00:48:38.523 lat (msec) : 2=0.12%, 4=11.02%, 10=88.82% 00:48:38.523 cpu : usr=93.38%, sys=6.02%, ctx=12, majf=0, minf=35 00:48:38.523 IO depths : 1=0.1%, 2=14.8%, 4=59.2%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:38.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:38.523 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:38.523 issued rwts: total=9071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:38.523 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:38.523 00:48:38.523 Run status group 0 (all jobs): 00:48:38.523 READ: bw=56.7MiB/s (59.5MB/s), 14.1MiB/s-14.2MiB/s (14.8MB/s-14.9MB/s), io=284MiB (298MB), run=5002-5004msec 00:48:38.523 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:48:38.523 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:38.523 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:38.523 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:38.524 00:48:38.524 real 0m24.173s 00:48:38.524 user 4m29.119s 00:48:38.524 sys 0m8.565s 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:48:38.524 17:01:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:38.524 ************************************ 00:48:38.524 END TEST fio_dif_rand_params 00:48:38.524 ************************************ 00:48:38.524 17:01:57 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:48:38.524 17:01:57 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:48:38.524 17:01:57 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:48:38.524 17:01:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:38.524 ************************************ 00:48:38.524 START TEST fio_dif_digest 00:48:38.524 ************************************ 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 
00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:38.524 bdev_null0 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:38.524 [2024-07-22 17:01:57.892102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:48:38.524 { 00:48:38.524 "params": { 00:48:38.524 "name": "Nvme$subsystem", 00:48:38.524 "trtype": "$TEST_TRANSPORT", 00:48:38.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:38.524 "adrfam": "ipv4", 00:48:38.524 "trsvcid": "$NVMF_PORT", 00:48:38.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:38.524 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:48:38.524 "hdgst": ${hdgst:-false}, 00:48:38.524 "ddgst": ${ddgst:-false} 00:48:38.524 }, 00:48:38.524 "method": "bdev_nvme_attach_controller" 00:48:38.524 } 00:48:38.524 EOF 00:48:38.524 )") 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:48:38.524 "params": { 00:48:38.524 "name": "Nvme0", 00:48:38.524 "trtype": "tcp", 00:48:38.524 "traddr": "10.0.0.2", 00:48:38.524 "adrfam": "ipv4", 00:48:38.524 "trsvcid": "4420", 00:48:38.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:38.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:38.524 "hdgst": true, 00:48:38.524 "ddgst": true 00:48:38.524 }, 00:48:38.524 "method": "bdev_nvme_attach_controller" 00:48:38.524 }' 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:38.524 17:01:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:38.524 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:48:38.524 ... 
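fio_dif_digest differs from the rand_params run above in two ways visible in the trace: the null bdev is created with 16-byte metadata and DIF type 3, and the generated attach parameters set "hdgst": true and "ddgst": true, enabling NVMe/TCP header and data digests on the connection, which fio then exercises with 128 KiB reads at iodepth 3 across 3 jobs for 10 seconds. The target side can be stood up by hand with the same RPCs the script issues (rpc_cmd is a wrapper around scripts/rpc.py; default RPC socket assumed):

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420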
00:48:38.524 fio-3.35 00:48:38.524 Starting 3 threads 00:48:38.806 EAL: No free 2048 kB hugepages reported on node 1 00:48:50.997 00:48:50.997 filename0: (groupid=0, jobs=1): err= 0: pid=2984354: Mon Jul 22 17:02:08 2024 00:48:50.997 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(268MiB/10008msec) 00:48:50.997 slat (nsec): min=4727, max=54657, avg=22550.69, stdev=5143.25 00:48:50.997 clat (usec): min=8427, max=22373, avg=13976.17, stdev=1090.26 00:48:50.997 lat (usec): min=8449, max=22401, avg=13998.72, stdev=1090.23 00:48:50.997 clat percentiles (usec): 00:48:50.997 | 1.00th=[11469], 5.00th=[12256], 10.00th=[12780], 20.00th=[13173], 00:48:50.997 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:48:50.997 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:48:50.997 | 99.00th=[16450], 99.50th=[16909], 99.90th=[21890], 99.95th=[22414], 00:48:50.997 | 99.99th=[22414] 00:48:50.997 bw ( KiB/s): min=26112, max=28672, per=34.04%, avg=27404.80, stdev=697.26, samples=20 00:48:50.997 iops : min= 204, max= 224, avg=214.10, stdev= 5.45, samples=20 00:48:50.997 lat (msec) : 10=0.51%, 20=99.35%, 50=0.14% 00:48:50.997 cpu : usr=93.94%, sys=5.48%, ctx=37, majf=0, minf=66 00:48:50.997 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:50.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:50.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:50.997 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:50.997 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:50.997 filename0: (groupid=0, jobs=1): err= 0: pid=2984355: Mon Jul 22 17:02:08 2024 00:48:50.997 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(260MiB/10046msec) 00:48:50.997 slat (nsec): min=4924, max=44330, avg=17461.61, stdev=5402.32 00:48:50.997 clat (usec): min=11205, max=56439, avg=14440.30, stdev=2236.81 00:48:50.997 lat (usec): min=11218, max=56453, avg=14457.76, stdev=2236.95 00:48:50.997 clat percentiles (usec): 00:48:50.997 | 1.00th=[11994], 5.00th=[12649], 10.00th=[13042], 20.00th=[13435], 00:48:50.997 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14222], 60.00th=[14615], 00:48:50.997 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15795], 95.00th=[16188], 00:48:50.997 | 99.00th=[17171], 99.50th=[17695], 99.90th=[55313], 99.95th=[56361], 00:48:50.997 | 99.99th=[56361] 00:48:50.997 bw ( KiB/s): min=24064, max=27648, per=33.06%, avg=26611.20, stdev=844.88, samples=20 00:48:50.997 iops : min= 188, max= 216, avg=207.90, stdev= 6.60, samples=20 00:48:50.997 lat (msec) : 20=99.62%, 50=0.19%, 100=0.19% 00:48:50.997 cpu : usr=92.65%, sys=6.87%, ctx=20, majf=0, minf=123 00:48:50.997 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:50.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:50.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:50.997 issued rwts: total=2081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:50.997 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:50.997 filename0: (groupid=0, jobs=1): err= 0: pid=2984356: Mon Jul 22 17:02:08 2024 00:48:50.997 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(262MiB/10006msec) 00:48:50.997 slat (nsec): min=5007, max=84267, avg=17165.25, stdev=5462.42 00:48:50.997 clat (usec): min=8546, max=24458, avg=14320.73, stdev=1185.30 00:48:50.997 lat (usec): min=8565, max=24502, avg=14337.90, stdev=1185.26 00:48:50.997 clat percentiles (usec): 00:48:50.997 | 
1.00th=[11731], 5.00th=[12518], 10.00th=[12911], 20.00th=[13435], 00:48:50.997 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:48:50.997 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15795], 95.00th=[16188], 00:48:50.997 | 99.00th=[16909], 99.50th=[17433], 99.90th=[24511], 99.95th=[24511], 00:48:50.997 | 99.99th=[24511] 00:48:50.997 bw ( KiB/s): min=25600, max=28160, per=33.23%, avg=26754.65, stdev=595.57, samples=20 00:48:50.997 iops : min= 200, max= 220, avg=209.00, stdev= 4.66, samples=20 00:48:50.997 lat (msec) : 10=0.43%, 20=99.43%, 50=0.14% 00:48:50.997 cpu : usr=92.69%, sys=6.83%, ctx=30, majf=0, minf=198 00:48:50.997 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:50.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:50.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:50.997 issued rwts: total=2093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:50.997 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:50.997 00:48:50.997 Run status group 0 (all jobs): 00:48:50.997 READ: bw=78.6MiB/s (82.4MB/s), 25.9MiB/s-26.8MiB/s (27.2MB/s-28.1MB/s), io=790MiB (828MB), run=10006-10046msec 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:50.997 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:50.997 00:48:50.997 real 0m11.052s 00:48:50.997 user 0m29.009s 00:48:50.998 sys 0m2.204s 00:48:50.998 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:48:50.998 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:50.998 ************************************ 00:48:50.998 END TEST fio_dif_digest 00:48:50.998 ************************************ 00:48:50.998 17:02:08 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:48:50.998 17:02:08 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:48:50.998 17:02:08 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:48:50.998 17:02:08 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:48:50.998 17:02:08 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:48:50.998 17:02:08 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:48:50.998 17:02:08 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:48:50.998 17:02:08 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:48:50.998 rmmod nvme_tcp 00:48:50.998 rmmod nvme_fabrics 00:48:50.998 
rmmod nvme_keyring 00:48:50.998 17:02:08 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:48:50.998 17:02:08 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:48:50.998 17:02:08 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:48:50.998 17:02:08 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2978357 ']' 00:48:50.998 17:02:08 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2978357 00:48:50.998 17:02:08 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 2978357 ']' 00:48:50.998 17:02:08 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 2978357 00:48:50.998 17:02:08 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:48:50.998 17:02:08 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:48:50.998 17:02:08 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2978357 00:48:50.998 17:02:09 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:48:50.998 17:02:09 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:48:50.998 17:02:09 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2978357' 00:48:50.998 killing process with pid 2978357 00:48:50.998 17:02:09 nvmf_dif -- common/autotest_common.sh@965 -- # kill 2978357 00:48:50.998 17:02:09 nvmf_dif -- common/autotest_common.sh@970 -- # wait 2978357 00:48:50.998 17:02:09 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:48:50.998 17:02:09 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:48:50.998 Waiting for block devices as requested 00:48:50.998 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:48:50.998 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:48:50.998 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:48:51.256 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:48:51.256 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:48:51.256 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:48:51.256 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:48:51.514 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:48:51.514 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:48:51.514 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:48:51.772 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:48:51.772 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:48:51.772 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:48:51.772 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:48:52.030 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:48:52.030 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:48:52.030 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:48:52.288 17:02:11 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:48:52.288 17:02:11 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:48:52.288 17:02:11 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:48:52.288 17:02:11 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:48:52.288 17:02:11 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:52.288 17:02:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:48:52.288 17:02:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:54.188 17:02:13 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:48:54.188 00:48:54.188 real 1m7.188s 00:48:54.188 user 6m24.784s 00:48:54.188 sys 0m21.318s 00:48:54.188 17:02:13 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:48:54.188 17:02:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:48:54.188 ************************************ 00:48:54.188 END TEST nvmf_dif 00:48:54.188 ************************************ 00:48:54.188 17:02:13 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:48:54.188 17:02:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:48:54.188 17:02:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:48:54.188 17:02:13 -- common/autotest_common.sh@10 -- # set +x 00:48:54.188 ************************************ 00:48:54.188 START TEST nvmf_abort_qd_sizes 00:48:54.188 ************************************ 00:48:54.188 17:02:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:48:54.447 * Looking for test storage... 00:48:54.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:54.447 17:02:13 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:48:54.447 17:02:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:48:56.976 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:48:56.977 Found 0000:82:00.0 (0x8086 - 0x159b) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:48:56.977 Found 0000:82:00.1 (0x8086 - 0x159b) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:48:56.977 Found net devices under 0000:82:00.0: cvl_0_0 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:48:56.977 Found net devices under 0000:82:00.1: cvl_0_1 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:48:56.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:56.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:48:56.977 00:48:56.977 --- 10.0.0.2 ping statistics --- 00:48:56.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:56.977 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:48:56.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:56.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:48:56.977 00:48:56.977 --- 10.0.0.1 ping statistics --- 00:48:56.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:56.977 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:48:56.977 17:02:16 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:48:57.912 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:48:57.912 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:48:57.912 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:48:57.912 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:48:57.912 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:48:57.912 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:48:57.912 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:48:57.912 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:48:57.912 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:48:57.912 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:48:57.912 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:48:58.170 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:48:58.170 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:48:58.170 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:48:58.170 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:48:58.170 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:49:00.072 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2989691 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2989691 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 2989691 ']' 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:49:00.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:00.072 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:00.072 [2024-07-22 17:02:19.469212] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:49:00.072 [2024-07-22 17:02:19.469308] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:00.072 EAL: No free 2048 kB hugepages reported on node 1 00:49:00.072 [2024-07-22 17:02:19.549005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:00.072 [2024-07-22 17:02:19.641332] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:00.072 [2024-07-22 17:02:19.641394] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:00.072 [2024-07-22 17:02:19.641424] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:00.072 [2024-07-22 17:02:19.641439] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:00.072 [2024-07-22 17:02:19.641452] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:00.072 [2024-07-22 17:02:19.641538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:00.072 [2024-07-22 17:02:19.641606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:49:00.072 [2024-07-22 17:02:19.641697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:49:00.072 [2024-07-22 17:02:19.641700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:81:00.0 ]] 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:81:00.0 ]] 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:81:00.0 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:81:00.0 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:49:00.330 17:02:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:00.330 ************************************ 00:49:00.330 START TEST spdk_target_abort 00:49:00.330 ************************************ 00:49:00.330 17:02:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:49:00.330 17:02:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:49:00.330 17:02:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:81:00.0 -b spdk_target 00:49:00.330 17:02:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:00.330 17:02:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:03.609 spdk_targetn1 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:03.609 [2024-07-22 17:02:22.678768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:03.609 [2024-07-22 17:02:22.711026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:03.609 17:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:03.609 EAL: No free 2048 kB hugepages reported on node 1 
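The rabort helper stepped through above expands to one pass of build/examples/abort per queue depth in qds=(4 24 64), each issuing mixed 4 KiB reads and writes (-w rw -M 50 -o 4096) against the listener and attempting to abort them in flight; the per-pass "success"/"unsuccess" totals below count abort commands that did and did not complete successfully. The loop being traced is roughly (paraphrased from the xtrace, not copied from the script source):

    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done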
00:49:06.928 Initializing NVMe Controllers 00:49:06.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:49:06.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:06.928 Initialization complete. Launching workers. 00:49:06.928 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11417, failed: 0 00:49:06.928 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1300, failed to submit 10117 00:49:06.928 success 715, unsuccess 585, failed 0 00:49:06.928 17:02:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:06.928 17:02:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:06.928 EAL: No free 2048 kB hugepages reported on node 1 00:49:10.206 Initializing NVMe Controllers 00:49:10.206 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:49:10.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:10.206 Initialization complete. Launching workers. 00:49:10.206 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8644, failed: 0 00:49:10.206 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1255, failed to submit 7389 00:49:10.206 success 302, unsuccess 953, failed 0 00:49:10.206 17:02:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:10.206 17:02:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:10.206 EAL: No free 2048 kB hugepages reported on node 1 00:49:13.486 Initializing NVMe Controllers 00:49:13.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:49:13.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:13.486 Initialization complete. Launching workers. 
00:49:13.486 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31481, failed: 0 00:49:13.486 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2772, failed to submit 28709 00:49:13.486 success 526, unsuccess 2246, failed 0 00:49:13.486 17:02:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:49:13.486 17:02:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:13.486 17:02:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:13.486 17:02:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:13.486 17:02:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:49:13.486 17:02:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:13.486 17:02:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2989691 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 2989691 ']' 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 2989691 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2989691 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2989691' 00:49:15.384 killing process with pid 2989691 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 2989691 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 2989691 00:49:15.384 00:49:15.384 real 0m15.079s 00:49:15.384 user 0m57.003s 00:49:15.384 sys 0m3.013s 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:15.384 ************************************ 00:49:15.384 END TEST spdk_target_abort 00:49:15.384 ************************************ 00:49:15.384 17:02:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:49:15.384 17:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:49:15.384 17:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:49:15.384 17:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:15.384 ************************************ 00:49:15.384 START TEST kernel_target_abort 00:49:15.384 
************************************ 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:15.384 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:49:15.385 17:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:49:16.759 Waiting for block devices as requested 00:49:16.759 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:49:17.018 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:49:17.018 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:49:17.018 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:49:17.276 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:49:17.276 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:49:17.276 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:49:17.276 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:49:17.276 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:49:17.535 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:49:17.535 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:49:17.535 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:49:17.535 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:49:17.793 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:49:17.793 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:49:17.793 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:49:18.051 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:49:18.051 No valid GPT data, bailing 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:18.051 17:02:37 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:49:18.051 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:49:18.310 00:49:18.310 Discovery Log Number of Records 2, Generation counter 2 00:49:18.310 =====Discovery Log Entry 0====== 00:49:18.310 trtype: tcp 00:49:18.310 adrfam: ipv4 00:49:18.310 subtype: current discovery subsystem 00:49:18.310 treq: not specified, sq flow control disable supported 00:49:18.310 portid: 1 00:49:18.310 trsvcid: 4420 00:49:18.310 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:49:18.310 traddr: 10.0.0.1 00:49:18.310 eflags: none 00:49:18.310 sectype: none 00:49:18.310 =====Discovery Log Entry 1====== 00:49:18.310 trtype: tcp 00:49:18.310 adrfam: ipv4 00:49:18.310 subtype: nvme subsystem 00:49:18.310 treq: not specified, sq flow control disable supported 00:49:18.310 portid: 1 00:49:18.310 trsvcid: 4420 00:49:18.310 subnqn: nqn.2016-06.io.spdk:testnqn 00:49:18.310 traddr: 10.0.0.1 00:49:18.310 eflags: none 00:49:18.310 sectype: none 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:18.310 17:02:37 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:18.310 17:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:18.310 EAL: No free 2048 kB hugepages reported on node 1 00:49:21.593 Initializing NVMe Controllers 00:49:21.593 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:21.593 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:21.593 Initialization complete. Launching workers. 00:49:21.593 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37447, failed: 0 00:49:21.593 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37447, failed to submit 0 00:49:21.593 success 0, unsuccess 37447, failed 0 00:49:21.593 17:02:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:21.593 17:02:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:21.593 EAL: No free 2048 kB hugepages reported on node 1 00:49:24.887 Initializing NVMe Controllers 00:49:24.887 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:24.887 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:24.887 Initialization complete. Launching workers. 
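
The configure_kernel_target steps traced at nvmf/common.sh@658-@677 above wire up the Linux kernel nvmet target through configfs. xtrace does not capture the redirection targets of the echo commands, so the attribute paths below are the standard nvmet configfs names, an educated reconstruction rather than something read off the log (run as root):

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                                              # @642
mkdir "$subsys"                                             # @658
mkdir "$subsys/namespaces/1"                                # @659
mkdir "$port"                                               # @660
echo "SPDK-$nqn"  > "$subsys/attr_model"                    # @665
echo 1            > "$subsys/attr_allow_any_host"           # @667
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"      # @668
echo 1            > "$subsys/namespaces/1/enable"           # @669
echo 10.0.0.1     > "$port/addr_traddr"                     # @671
echo tcp          > "$port/addr_trtype"                     # @672
echo 4420         > "$port/addr_trsvcid"                    # @673
echo ipv4         > "$port/addr_adrfam"                     # @674
ln -s "$subsys" "$port/subsystems/"                         # @677

Against this kernel target every abort is accepted for submission but none succeed (success 0), in contrast to the SPDK target runs earlier, where a fraction of submitted aborts completed successfully.
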
00:49:24.887 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75800, failed: 0 00:49:24.887 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19114, failed to submit 56686 00:49:24.887 success 0, unsuccess 19114, failed 0 00:49:24.887 17:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:24.887 17:02:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:24.887 EAL: No free 2048 kB hugepages reported on node 1 00:49:28.353 Initializing NVMe Controllers 00:49:28.353 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:28.353 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:28.353 Initialization complete. Launching workers. 00:49:28.353 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69867, failed: 0 00:49:28.353 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17462, failed to submit 52405 00:49:28.353 success 0, unsuccess 17462, failed 0 00:49:28.353 17:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:49:28.353 17:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:49:28.353 17:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:49:28.353 17:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:28.353 17:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:28.353 17:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:49:28.353 17:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:28.353 17:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:49:28.353 17:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:49:28.353 17:02:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:49:28.953 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:49:28.953 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:49:28.953 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:49:28.953 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:49:28.953 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:49:28.953 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:49:28.953 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:49:28.953 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:49:28.953 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:49:28.953 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:49:28.953 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:49:28.953 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:49:28.953 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:49:28.953 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:49:28.953 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:49:28.953 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:49:30.855 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:49:30.855 00:49:30.855 real 0m15.393s 00:49:30.855 user 0m5.872s 00:49:30.855 sys 0m3.548s 00:49:30.855 17:02:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:49:30.855 17:02:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:30.855 ************************************ 00:49:30.855 END TEST kernel_target_abort 00:49:30.855 ************************************ 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:49:30.855 rmmod nvme_tcp 00:49:30.855 rmmod nvme_fabrics 00:49:30.855 rmmod nvme_keyring 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2989691 ']' 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2989691 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 2989691 ']' 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 2989691 00:49:30.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2989691) - No such process 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 2989691 is not found' 00:49:30.855 Process with pid 2989691 is not found 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:49:30.855 17:02:50 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:49:32.230 Waiting for block devices as requested 00:49:32.231 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:49:32.488 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:49:32.488 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:49:32.488 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:49:32.746 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:49:32.746 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:49:32.746 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:49:32.746 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:49:33.005 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:49:33.005 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:49:33.005 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:49:33.005 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:49:33.263 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:49:33.263 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:49:33.263 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:49:33.263 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:49:33.520 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:49:33.520 17:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:49:33.520 17:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:49:33.520 17:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:49:33.520 17:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:49:33.520 17:02:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:33.520 17:02:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:49:33.520 17:02:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:36.048 17:02:55 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:49:36.048 00:49:36.048 real 0m41.331s 00:49:36.049 user 1m5.246s 00:49:36.049 sys 0m10.318s 00:49:36.049 17:02:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:49:36.049 17:02:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:36.049 ************************************ 00:49:36.049 END TEST nvmf_abort_qd_sizes 00:49:36.049 ************************************ 00:49:36.049 17:02:55 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:49:36.049 17:02:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:49:36.049 17:02:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:49:36.049 17:02:55 -- common/autotest_common.sh@10 -- # set +x 00:49:36.049 ************************************ 00:49:36.049 START TEST keyring_file 00:49:36.049 ************************************ 00:49:36.049 17:02:55 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:49:36.049 * Looking for test storage... 
00:49:36.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:36.049 17:02:55 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:36.049 17:02:55 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:36.049 17:02:55 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:36.049 17:02:55 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:36.049 17:02:55 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:36.049 17:02:55 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:36.049 17:02:55 keyring_file -- paths/export.sh@5 -- # export PATH 00:49:36.049 17:02:55 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@47 -- # : 0 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fgkxXLsVV7 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:49:36.049 17:02:55 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fgkxXLsVV7 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fgkxXLsVV7 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.fgkxXLsVV7 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@17 -- # name=key1 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MJfDHPlTIY 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:49:36.049 17:02:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MJfDHPlTIY 00:49:36.049 17:02:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MJfDHPlTIY 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.MJfDHPlTIY 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@30 -- # tgtpid=2995875 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:49:36.049 17:02:55 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2995875 00:49:36.049 17:02:55 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2995875 ']' 00:49:36.049 17:02:55 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:36.049 17:02:55 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:49:36.049 17:02:55 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:36.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:36.049 17:02:55 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:36.049 17:02:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:36.049 [2024-07-22 17:02:55.394888] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
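
The prep_key/format_interchange_psk steps above wrap a raw hex key in the NVMe TLS PSK interchange format before writing it to a 0600 temp file. The trace only shows a key being piped into `python -`, so the framing below (base64 over the key bytes plus a little-endian CRC32, with hash indicator 00 for digest 0, i.e. no HMAC) is a reconstruction of what that one-liner computes, not a verbatim copy of it:

key=00112233445566778899aabbccddeeff
path=$(mktemp)
python3 - "$key" > "$path" <<'EOF'
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(raw).to_bytes(4, "little")  # CRC32 appended LSB-first
print(f"NVMeTLSkey-1:00:{base64.b64encode(raw + crc).decode()}:")
EOF
chmod 0600 "$path"  # keyring.c rejects group/world-accessible key files
                    # (see the 0660 "Operation not permitted" failure below)
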
00:49:36.049 [2024-07-22 17:02:55.395014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995875 ] 00:49:36.049 EAL: No free 2048 kB hugepages reported on node 1 00:49:36.049 [2024-07-22 17:02:55.464553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:36.049 [2024-07-22 17:02:55.553705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:49:36.308 17:02:55 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:36.308 [2024-07-22 17:02:55.815557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:36.308 null0 00:49:36.308 [2024-07-22 17:02:55.847613] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:49:36.308 [2024-07-22 17:02:55.848179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:49:36.308 [2024-07-22 17:02:55.855625] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:36.308 17:02:55 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:36.308 [2024-07-22 17:02:55.867649] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:49:36.308 request: 00:49:36.308 { 00:49:36.308 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:49:36.308 "secure_channel": false, 00:49:36.308 "listen_address": { 00:49:36.308 "trtype": "tcp", 00:49:36.308 "traddr": "127.0.0.1", 00:49:36.308 "trsvcid": "4420" 00:49:36.308 }, 00:49:36.308 "method": "nvmf_subsystem_add_listener", 00:49:36.308 "req_id": 1 00:49:36.308 } 00:49:36.308 Got JSON-RPC error response 00:49:36.308 response: 00:49:36.308 { 00:49:36.308 "code": -32602, 00:49:36.308 "message": "Invalid parameters" 00:49:36.308 } 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:49:36.308 17:02:55 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:49:36.308 17:02:55 keyring_file -- keyring/file.sh@46 -- # bperfpid=2995883 00:49:36.308 17:02:55 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:49:36.308 17:02:55 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2995883 /var/tmp/bperf.sock 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2995883 ']' 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:36.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:36.308 17:02:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:36.308 [2024-07-22 17:02:55.914528] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:49:36.308 [2024-07-22 17:02:55.914590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995883 ] 00:49:36.308 EAL: No free 2048 kB hugepages reported on node 1 00:49:36.566 [2024-07-22 17:02:55.985673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:36.566 [2024-07-22 17:02:56.077076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:36.566 17:02:56 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:36.566 17:02:56 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:49:36.566 17:02:56 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fgkxXLsVV7 00:49:36.566 17:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fgkxXLsVV7 00:49:36.823 17:02:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MJfDHPlTIY 00:49:36.823 17:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MJfDHPlTIY 00:49:37.080 17:02:56 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:49:37.081 17:02:56 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:49:37.081 17:02:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:37.081 17:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:37.081 17:02:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:37.338 17:02:56 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.fgkxXLsVV7 == \/\t\m\p\/\t\m\p\.\f\g\k\x\X\L\s\V\V\7 ]] 00:49:37.338 17:02:56 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:49:37.338 17:02:56 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:49:37.338 17:02:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:37.338 17:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:37.338 17:02:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:37.596 17:02:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.MJfDHPlTIY == \/\t\m\p\/\t\m\p\.\M\J\f\D\H\P\l\T\I\Y ]] 00:49:37.596 17:02:57 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:49:37.596 17:02:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:37.596 17:02:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:37.596 17:02:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:37.596 17:02:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:37.596 17:02:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:37.853 17:02:57 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:49:37.853 17:02:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:49:37.853 17:02:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:49:37.853 17:02:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:37.853 17:02:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:37.853 17:02:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:37.853 17:02:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:38.110 17:02:57 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:49:38.110 17:02:57 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:38.110 17:02:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:38.367 [2024-07-22 17:02:57.881941] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:49:38.367 nvme0n1 00:49:38.367 17:02:57 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:49:38.367 17:02:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:38.367 17:02:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:38.367 17:02:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:38.367 17:02:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:38.367 17:02:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:38.625 17:02:58 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:49:38.625 17:02:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:49:38.625 17:02:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:49:38.625 17:02:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:38.625 17:02:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:38.625 
17:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:38.625 17:02:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:38.882 17:02:58 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:49:38.882 17:02:58 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:39.138 Running I/O for 1 seconds... 00:49:40.070 00:49:40.070 Latency(us) 00:49:40.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:40.070 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:49:40.070 nvme0n1 : 1.01 7532.54 29.42 0.00 0.00 16887.18 9126.49 27767.85 00:49:40.070 =================================================================================================================== 00:49:40.070 Total : 7532.54 29.42 0.00 0.00 16887.18 9126.49 27767.85 00:49:40.070 0 00:49:40.070 17:02:59 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:49:40.070 17:02:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:49:40.328 17:02:59 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:49:40.328 17:02:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:40.328 17:02:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:40.328 17:02:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:40.328 17:02:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:40.328 17:02:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:40.586 17:03:00 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:49:40.586 17:03:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:49:40.586 17:03:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:49:40.586 17:03:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:40.586 17:03:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:40.586 17:03:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:40.586 17:03:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:40.844 17:03:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:49:40.844 17:03:00 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:49:40.844 17:03:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:49:40.844 17:03:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:49:40.844 17:03:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:49:40.844 17:03:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:40.844 17:03:00 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:49:40.844 17:03:00 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:40.844 17:03:00 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:49:40.844 17:03:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:49:41.102 [2024-07-22 17:03:00.638869] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:49:41.102 [2024-07-22 17:03:00.639324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e4310 (107): Transport endpoint is not connected 00:49:41.102 [2024-07-22 17:03:00.640319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e4310 (9): Bad file descriptor 00:49:41.102 [2024-07-22 17:03:00.641316] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:49:41.102 [2024-07-22 17:03:00.641353] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:49:41.102 [2024-07-22 17:03:00.641378] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:49:41.102 request: 00:49:41.102 { 00:49:41.102 "name": "nvme0", 00:49:41.102 "trtype": "tcp", 00:49:41.102 "traddr": "127.0.0.1", 00:49:41.102 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:49:41.102 "adrfam": "ipv4", 00:49:41.102 "trsvcid": "4420", 00:49:41.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:41.102 "psk": "key1", 00:49:41.102 "method": "bdev_nvme_attach_controller", 00:49:41.102 "req_id": 1 00:49:41.102 } 00:49:41.102 Got JSON-RPC error response 00:49:41.102 response: 00:49:41.102 { 00:49:41.102 "code": -5, 00:49:41.102 "message": "Input/output error" 00:49:41.102 } 00:49:41.102 17:03:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:49:41.102 17:03:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:49:41.102 17:03:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:49:41.102 17:03:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:49:41.102 17:03:00 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:49:41.102 17:03:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:41.102 17:03:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:41.102 17:03:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:41.102 17:03:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:41.102 17:03:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:41.359 17:03:00 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:49:41.359 17:03:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:49:41.359 17:03:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:49:41.359 17:03:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:41.359 17:03:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:41.359 17:03:00 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:41.359 17:03:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:41.618 17:03:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:49:41.618 17:03:01 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:49:41.618 17:03:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:49:41.875 17:03:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:49:41.875 17:03:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:49:42.133 17:03:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:49:42.133 17:03:01 keyring_file -- keyring/file.sh@77 -- # jq length 00:49:42.133 17:03:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:42.391 17:03:01 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:49:42.391 17:03:01 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.fgkxXLsVV7 00:49:42.391 17:03:01 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.fgkxXLsVV7 00:49:42.391 17:03:01 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:49:42.391 17:03:01 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.fgkxXLsVV7 00:49:42.391 17:03:01 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:49:42.391 17:03:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:42.391 17:03:01 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:49:42.391 17:03:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:42.391 17:03:01 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fgkxXLsVV7 00:49:42.391 17:03:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fgkxXLsVV7 00:49:42.649 [2024-07-22 17:03:02.158232] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fgkxXLsVV7': 0100660 00:49:42.649 [2024-07-22 17:03:02.158281] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:49:42.649 request: 00:49:42.649 { 00:49:42.649 "name": "key0", 00:49:42.649 "path": "/tmp/tmp.fgkxXLsVV7", 00:49:42.649 "method": "keyring_file_add_key", 00:49:42.649 "req_id": 1 00:49:42.649 } 00:49:42.649 Got JSON-RPC error response 00:49:42.649 response: 00:49:42.649 { 00:49:42.649 "code": -1, 00:49:42.649 "message": "Operation not permitted" 00:49:42.649 } 00:49:42.649 17:03:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:49:42.649 17:03:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:49:42.649 17:03:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:49:42.649 17:03:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:49:42.649 17:03:02 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.fgkxXLsVV7 00:49:42.649 17:03:02 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.fgkxXLsVV7 00:49:42.649 17:03:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fgkxXLsVV7 00:49:42.907 17:03:02 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.fgkxXLsVV7 00:49:42.907 17:03:02 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:49:42.907 17:03:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:42.907 17:03:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:42.907 17:03:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:42.907 17:03:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:42.907 17:03:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:43.165 17:03:02 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:49:43.165 17:03:02 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:43.165 17:03:02 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:49:43.165 17:03:02 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:43.165 17:03:02 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:49:43.165 17:03:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:43.165 17:03:02 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:49:43.165 17:03:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:43.165 17:03:02 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:43.165 17:03:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:43.423 [2024-07-22 17:03:02.924331] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.fgkxXLsVV7': No such file or directory 00:49:43.423 [2024-07-22 17:03:02.924366] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:49:43.423 [2024-07-22 17:03:02.924403] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:49:43.423 [2024-07-22 17:03:02.924414] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:49:43.423 [2024-07-22 17:03:02.924426] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:49:43.423 request: 00:49:43.423 { 00:49:43.423 "name": "nvme0", 00:49:43.423 "trtype": "tcp", 00:49:43.423 "traddr": "127.0.0.1", 00:49:43.423 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:49:43.423 "adrfam": "ipv4", 00:49:43.423 "trsvcid": "4420", 00:49:43.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:43.423 "psk": "key0", 00:49:43.423 "method": "bdev_nvme_attach_controller", 
00:49:43.423 "req_id": 1 00:49:43.423 } 00:49:43.423 Got JSON-RPC error response 00:49:43.423 response: 00:49:43.423 { 00:49:43.423 "code": -19, 00:49:43.423 "message": "No such device" 00:49:43.423 } 00:49:43.423 17:03:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:49:43.423 17:03:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:49:43.423 17:03:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:49:43.423 17:03:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:49:43.423 17:03:02 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:49:43.423 17:03:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:49:43.681 17:03:03 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:49:43.681 17:03:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:49:43.681 17:03:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:49:43.681 17:03:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:49:43.681 17:03:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:49:43.681 17:03:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:49:43.681 17:03:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.f38UgEXU6m 00:49:43.682 17:03:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:49:43.682 17:03:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:49:43.682 17:03:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:49:43.682 17:03:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:49:43.682 17:03:03 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:49:43.682 17:03:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:49:43.682 17:03:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:49:43.682 17:03:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.f38UgEXU6m 00:49:43.682 17:03:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.f38UgEXU6m 00:49:43.682 17:03:03 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.f38UgEXU6m 00:49:43.682 17:03:03 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.f38UgEXU6m 00:49:43.682 17:03:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.f38UgEXU6m 00:49:43.940 17:03:03 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:43.940 17:03:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:44.198 nvme0n1 00:49:44.198 17:03:03 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:49:44.198 17:03:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:44.198 17:03:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:44.198 17:03:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:44.198 17:03:03 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:44.198 17:03:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:44.456 17:03:04 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:49:44.456 17:03:04 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:49:44.456 17:03:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:49:44.714 17:03:04 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:49:44.714 17:03:04 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:49:44.714 17:03:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:44.714 17:03:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:44.714 17:03:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:44.972 17:03:04 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:49:44.972 17:03:04 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:49:44.972 17:03:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:44.972 17:03:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:44.972 17:03:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:44.972 17:03:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:44.972 17:03:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:45.230 17:03:04 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:49:45.230 17:03:04 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:49:45.230 17:03:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:49:45.488 17:03:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:49:45.488 17:03:05 keyring_file -- keyring/file.sh@104 -- # jq length 00:49:45.488 17:03:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:45.746 17:03:05 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:49:45.746 17:03:05 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.f38UgEXU6m 00:49:45.746 17:03:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.f38UgEXU6m 00:49:46.004 17:03:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MJfDHPlTIY 00:49:46.004 17:03:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MJfDHPlTIY 00:49:46.263 17:03:05 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:46.263 17:03:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:46.521 nvme0n1 00:49:46.521 17:03:06 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:49:46.521 17:03:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:49:47.088 17:03:06 keyring_file -- keyring/file.sh@112 -- # config='{ 00:49:47.088 "subsystems": [ 00:49:47.088 { 00:49:47.088 "subsystem": "keyring", 00:49:47.088 "config": [ 00:49:47.088 { 00:49:47.088 "method": "keyring_file_add_key", 00:49:47.088 "params": { 00:49:47.088 "name": "key0", 00:49:47.088 "path": "/tmp/tmp.f38UgEXU6m" 00:49:47.088 } 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "method": "keyring_file_add_key", 00:49:47.088 "params": { 00:49:47.088 "name": "key1", 00:49:47.088 "path": "/tmp/tmp.MJfDHPlTIY" 00:49:47.088 } 00:49:47.088 } 00:49:47.088 ] 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "subsystem": "iobuf", 00:49:47.088 "config": [ 00:49:47.088 { 00:49:47.088 "method": "iobuf_set_options", 00:49:47.088 "params": { 00:49:47.088 "small_pool_count": 8192, 00:49:47.088 "large_pool_count": 1024, 00:49:47.088 "small_bufsize": 8192, 00:49:47.088 "large_bufsize": 135168 00:49:47.088 } 00:49:47.088 } 00:49:47.088 ] 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "subsystem": "sock", 00:49:47.088 "config": [ 00:49:47.088 { 00:49:47.088 "method": "sock_set_default_impl", 00:49:47.088 "params": { 00:49:47.088 "impl_name": "posix" 00:49:47.088 } 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "method": "sock_impl_set_options", 00:49:47.088 "params": { 00:49:47.088 "impl_name": "ssl", 00:49:47.088 "recv_buf_size": 4096, 00:49:47.088 "send_buf_size": 4096, 00:49:47.088 "enable_recv_pipe": true, 00:49:47.088 "enable_quickack": false, 00:49:47.088 "enable_placement_id": 0, 00:49:47.088 "enable_zerocopy_send_server": true, 00:49:47.088 "enable_zerocopy_send_client": false, 00:49:47.088 "zerocopy_threshold": 0, 00:49:47.088 "tls_version": 0, 00:49:47.088 "enable_ktls": false 00:49:47.088 } 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "method": "sock_impl_set_options", 00:49:47.088 "params": { 00:49:47.088 "impl_name": "posix", 00:49:47.088 "recv_buf_size": 2097152, 00:49:47.088 "send_buf_size": 2097152, 00:49:47.088 "enable_recv_pipe": true, 00:49:47.088 "enable_quickack": false, 00:49:47.088 "enable_placement_id": 0, 00:49:47.088 "enable_zerocopy_send_server": true, 00:49:47.088 "enable_zerocopy_send_client": false, 00:49:47.088 "zerocopy_threshold": 0, 00:49:47.088 "tls_version": 0, 00:49:47.088 "enable_ktls": false 00:49:47.088 } 00:49:47.088 } 00:49:47.088 ] 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "subsystem": "vmd", 00:49:47.088 "config": [] 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "subsystem": "accel", 00:49:47.088 "config": [ 00:49:47.088 { 00:49:47.088 "method": "accel_set_options", 00:49:47.088 "params": { 00:49:47.088 "small_cache_size": 128, 00:49:47.088 "large_cache_size": 16, 00:49:47.088 "task_count": 2048, 00:49:47.088 "sequence_count": 2048, 00:49:47.088 "buf_count": 2048 00:49:47.088 } 00:49:47.088 } 00:49:47.088 ] 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "subsystem": "bdev", 00:49:47.088 "config": [ 00:49:47.088 { 00:49:47.088 "method": "bdev_set_options", 00:49:47.088 "params": { 00:49:47.088 "bdev_io_pool_size": 65535, 00:49:47.088 "bdev_io_cache_size": 256, 00:49:47.088 "bdev_auto_examine": true, 00:49:47.088 "iobuf_small_cache_size": 128, 
00:49:47.088 "iobuf_large_cache_size": 16 00:49:47.088 } 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "method": "bdev_raid_set_options", 00:49:47.088 "params": { 00:49:47.088 "process_window_size_kb": 1024 00:49:47.088 } 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "method": "bdev_iscsi_set_options", 00:49:47.088 "params": { 00:49:47.088 "timeout_sec": 30 00:49:47.088 } 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "method": "bdev_nvme_set_options", 00:49:47.088 "params": { 00:49:47.088 "action_on_timeout": "none", 00:49:47.088 "timeout_us": 0, 00:49:47.088 "timeout_admin_us": 0, 00:49:47.088 "keep_alive_timeout_ms": 10000, 00:49:47.088 "arbitration_burst": 0, 00:49:47.088 "low_priority_weight": 0, 00:49:47.088 "medium_priority_weight": 0, 00:49:47.088 "high_priority_weight": 0, 00:49:47.088 "nvme_adminq_poll_period_us": 10000, 00:49:47.088 "nvme_ioq_poll_period_us": 0, 00:49:47.088 "io_queue_requests": 512, 00:49:47.088 "delay_cmd_submit": true, 00:49:47.088 "transport_retry_count": 4, 00:49:47.088 "bdev_retry_count": 3, 00:49:47.088 "transport_ack_timeout": 0, 00:49:47.088 "ctrlr_loss_timeout_sec": 0, 00:49:47.088 "reconnect_delay_sec": 0, 00:49:47.088 "fast_io_fail_timeout_sec": 0, 00:49:47.088 "disable_auto_failback": false, 00:49:47.088 "generate_uuids": false, 00:49:47.088 "transport_tos": 0, 00:49:47.088 "nvme_error_stat": false, 00:49:47.088 "rdma_srq_size": 0, 00:49:47.088 "io_path_stat": false, 00:49:47.088 "allow_accel_sequence": false, 00:49:47.088 "rdma_max_cq_size": 0, 00:49:47.088 "rdma_cm_event_timeout_ms": 0, 00:49:47.088 "dhchap_digests": [ 00:49:47.088 "sha256", 00:49:47.088 "sha384", 00:49:47.088 "sha512" 00:49:47.088 ], 00:49:47.088 "dhchap_dhgroups": [ 00:49:47.088 "null", 00:49:47.088 "ffdhe2048", 00:49:47.088 "ffdhe3072", 00:49:47.088 "ffdhe4096", 00:49:47.088 "ffdhe6144", 00:49:47.088 "ffdhe8192" 00:49:47.088 ] 00:49:47.088 } 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "method": "bdev_nvme_attach_controller", 00:49:47.088 "params": { 00:49:47.088 "name": "nvme0", 00:49:47.088 "trtype": "TCP", 00:49:47.088 "adrfam": "IPv4", 00:49:47.088 "traddr": "127.0.0.1", 00:49:47.088 "trsvcid": "4420", 00:49:47.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:47.088 "prchk_reftag": false, 00:49:47.088 "prchk_guard": false, 00:49:47.088 "ctrlr_loss_timeout_sec": 0, 00:49:47.088 "reconnect_delay_sec": 0, 00:49:47.088 "fast_io_fail_timeout_sec": 0, 00:49:47.088 "psk": "key0", 00:49:47.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:49:47.088 "hdgst": false, 00:49:47.088 "ddgst": false 00:49:47.088 } 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "method": "bdev_nvme_set_hotplug", 00:49:47.088 "params": { 00:49:47.088 "period_us": 100000, 00:49:47.088 "enable": false 00:49:47.088 } 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "method": "bdev_wait_for_examine" 00:49:47.088 } 00:49:47.088 ] 00:49:47.088 }, 00:49:47.088 { 00:49:47.088 "subsystem": "nbd", 00:49:47.088 "config": [] 00:49:47.088 } 00:49:47.089 ] 00:49:47.089 }' 00:49:47.089 17:03:06 keyring_file -- keyring/file.sh@114 -- # killprocess 2995883 00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2995883 ']' 00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2995883 00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@951 -- # uname 00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2995883 00:49:47.089 17:03:06 keyring_file 
-- common/autotest_common.sh@952 -- # process_name=reactor_1
00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2995883'
killing process with pid 2995883
17:03:06 keyring_file -- common/autotest_common.sh@965 -- # kill 2995883
Received shutdown signal, test time was about 1.000000 seconds
00:49:47.089
00:49:47.089 Latency(us)
00:49:47.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:47.089 ===================================================================================================================
00:49:47.089 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@970 -- # wait 2995883
17:03:06 keyring_file -- keyring/file.sh@117 -- # bperfpid=2997450
00:49:47.089 17:03:06 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2997450 /var/tmp/bperf.sock
00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2997450 ']'
00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:49:47.089 17:03:06 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100
00:49:47.089 17:03:06 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
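The relaunch here is SPDK's config-replay pattern: the first bdevperf's runtime state is captured as JSON with the save_config RPC and piped into a fresh instance, so the keyring, sock, and bdev subsystems come back without re-issuing each RPC by hand. A minimal sketch of the same flow, using the rpc.py and bdevperf paths from this job (the sketch is illustrative, not part of the test scripts themselves):

  # Capture the live JSON configuration from the running bperf socket.
  config=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock save_config)
  # Feed it to a new bdevperf via process substitution; bash hands the pipe
  # to the child as /dev/fd/63, which is the -c argument seen in the trace.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")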
00:49:47.089 17:03:06 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:49:47.089 "subsystems": [ 00:49:47.089 { 00:49:47.089 "subsystem": "keyring", 00:49:47.089 "config": [ 00:49:47.089 { 00:49:47.089 "method": "keyring_file_add_key", 00:49:47.089 "params": { 00:49:47.089 "name": "key0", 00:49:47.089 "path": "/tmp/tmp.f38UgEXU6m" 00:49:47.089 } 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "method": "keyring_file_add_key", 00:49:47.089 "params": { 00:49:47.089 "name": "key1", 00:49:47.089 "path": "/tmp/tmp.MJfDHPlTIY" 00:49:47.089 } 00:49:47.089 } 00:49:47.089 ] 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "subsystem": "iobuf", 00:49:47.089 "config": [ 00:49:47.089 { 00:49:47.089 "method": "iobuf_set_options", 00:49:47.089 "params": { 00:49:47.089 "small_pool_count": 8192, 00:49:47.089 "large_pool_count": 1024, 00:49:47.089 "small_bufsize": 8192, 00:49:47.089 "large_bufsize": 135168 00:49:47.089 } 00:49:47.089 } 00:49:47.089 ] 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "subsystem": "sock", 00:49:47.089 "config": [ 00:49:47.089 { 00:49:47.089 "method": "sock_set_default_impl", 00:49:47.089 "params": { 00:49:47.089 "impl_name": "posix" 00:49:47.089 } 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "method": "sock_impl_set_options", 00:49:47.089 "params": { 00:49:47.089 "impl_name": "ssl", 00:49:47.089 "recv_buf_size": 4096, 00:49:47.089 "send_buf_size": 4096, 00:49:47.089 "enable_recv_pipe": true, 00:49:47.089 "enable_quickack": false, 00:49:47.089 "enable_placement_id": 0, 00:49:47.089 "enable_zerocopy_send_server": true, 00:49:47.089 "enable_zerocopy_send_client": false, 00:49:47.089 "zerocopy_threshold": 0, 00:49:47.089 "tls_version": 0, 00:49:47.089 "enable_ktls": false 00:49:47.089 } 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "method": "sock_impl_set_options", 00:49:47.089 "params": { 00:49:47.089 "impl_name": "posix", 00:49:47.089 "recv_buf_size": 2097152, 00:49:47.089 "send_buf_size": 2097152, 00:49:47.089 "enable_recv_pipe": true, 00:49:47.089 "enable_quickack": false, 00:49:47.089 "enable_placement_id": 0, 00:49:47.089 "enable_zerocopy_send_server": true, 00:49:47.089 "enable_zerocopy_send_client": false, 00:49:47.089 "zerocopy_threshold": 0, 00:49:47.089 "tls_version": 0, 00:49:47.089 "enable_ktls": false 00:49:47.089 } 00:49:47.089 } 00:49:47.089 ] 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "subsystem": "vmd", 00:49:47.089 "config": [] 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "subsystem": "accel", 00:49:47.089 "config": [ 00:49:47.089 { 00:49:47.089 "method": "accel_set_options", 00:49:47.089 "params": { 00:49:47.089 "small_cache_size": 128, 00:49:47.089 "large_cache_size": 16, 00:49:47.089 "task_count": 2048, 00:49:47.089 "sequence_count": 2048, 00:49:47.089 "buf_count": 2048 00:49:47.089 } 00:49:47.089 } 00:49:47.089 ] 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "subsystem": "bdev", 00:49:47.089 "config": [ 00:49:47.089 { 00:49:47.089 "method": "bdev_set_options", 00:49:47.089 "params": { 00:49:47.089 "bdev_io_pool_size": 65535, 00:49:47.089 "bdev_io_cache_size": 256, 00:49:47.089 "bdev_auto_examine": true, 00:49:47.089 "iobuf_small_cache_size": 128, 00:49:47.089 "iobuf_large_cache_size": 16 00:49:47.089 } 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "method": "bdev_raid_set_options", 00:49:47.089 "params": { 00:49:47.089 "process_window_size_kb": 1024 00:49:47.089 } 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "method": "bdev_iscsi_set_options", 00:49:47.089 "params": { 00:49:47.089 "timeout_sec": 30 00:49:47.089 } 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "method": 
"bdev_nvme_set_options", 00:49:47.089 "params": { 00:49:47.089 "action_on_timeout": "none", 00:49:47.089 "timeout_us": 0, 00:49:47.089 "timeout_admin_us": 0, 00:49:47.089 "keep_alive_timeout_ms": 10000, 00:49:47.089 "arbitration_burst": 0, 00:49:47.089 "low_priority_weight": 0, 00:49:47.089 "medium_priority_weight": 0, 00:49:47.089 "high_priority_weight": 0, 00:49:47.089 "nvme_adminq_poll_period_us": 10000, 00:49:47.089 "nvme_ioq_poll_period_us": 0, 00:49:47.089 "io_queue_requests": 512, 00:49:47.089 "delay_cmd_submit": true, 00:49:47.089 "transport_retry_count": 4, 00:49:47.089 "bdev_retry_count": 3, 00:49:47.089 "transport_ack_timeout": 0, 00:49:47.089 "ctrlr_loss_timeout_sec": 0, 00:49:47.089 "reconnect_delay_sec": 0, 00:49:47.089 "fast_io_fail_timeout_sec": 0, 00:49:47.089 "disable_auto_failback": false, 00:49:47.089 "generate_uuids": false, 00:49:47.089 "transport_tos": 0, 00:49:47.089 "nvme_error_stat": false, 00:49:47.089 "rdma_srq_size": 0, 00:49:47.089 "io_path_stat": false, 00:49:47.089 "allow_accel_sequence": false, 00:49:47.089 "rdma_max_cq_size": 0, 00:49:47.089 "rdma_cm_event_timeout_ms": 0, 00:49:47.089 "dhchap_digests": [ 00:49:47.089 "sha256", 00:49:47.089 "sha384", 00:49:47.089 "sha512" 00:49:47.089 ], 00:49:47.089 "dhchap_dhgroups": [ 00:49:47.089 "null", 00:49:47.089 "ffdhe2048", 00:49:47.089 "ffdhe3072", 00:49:47.089 "ffdhe4096", 00:49:47.089 "ffdhe6144", 00:49:47.089 "ffdhe8192" 00:49:47.089 ] 00:49:47.089 } 00:49:47.089 }, 00:49:47.089 { 00:49:47.089 "method": "bdev_nvme_attach_controller", 00:49:47.089 "params": { 00:49:47.089 "name": "nvme0", 00:49:47.089 "trtype": "TCP", 00:49:47.090 "adrfam": "IPv4", 00:49:47.090 "traddr": "127.0.0.1", 00:49:47.090 "trsvcid": "4420", 00:49:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:47.090 "prchk_reftag": false, 00:49:47.090 "prchk_guard": false, 00:49:47.090 "ctrlr_loss_timeout_sec": 0, 00:49:47.090 "reconnect_delay_sec": 0, 00:49:47.090 "fast_io_fail_timeout_sec": 0, 00:49:47.090 "psk": "key0", 00:49:47.090 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:49:47.090 "hdgst": false, 00:49:47.090 "ddgst": false 00:49:47.090 } 00:49:47.090 }, 00:49:47.090 { 00:49:47.090 "method": "bdev_nvme_set_hotplug", 00:49:47.090 "params": { 00:49:47.090 "period_us": 100000, 00:49:47.090 "enable": false 00:49:47.090 } 00:49:47.090 }, 00:49:47.090 { 00:49:47.090 "method": "bdev_wait_for_examine" 00:49:47.090 } 00:49:47.090 ] 00:49:47.090 }, 00:49:47.090 { 00:49:47.090 "subsystem": "nbd", 00:49:47.090 "config": [] 00:49:47.090 } 00:49:47.090 ] 00:49:47.090 }' 00:49:47.090 17:03:06 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:47.090 17:03:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:47.348 [2024-07-22 17:03:06.772132] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:49:47.348 [2024-07-22 17:03:06.772235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997450 ] 00:49:47.348 EAL: No free 2048 kB hugepages reported on node 1 00:49:47.348 [2024-07-22 17:03:06.844258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:47.348 [2024-07-22 17:03:06.933249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:47.615 [2024-07-22 17:03:07.120117] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:49:48.180 17:03:07 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:48.180 17:03:07 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:49:48.180 17:03:07 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:49:48.180 17:03:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:48.180 17:03:07 keyring_file -- keyring/file.sh@120 -- # jq length 00:49:48.438 17:03:07 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:49:48.438 17:03:07 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:49:48.438 17:03:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:48.438 17:03:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:48.438 17:03:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:48.438 17:03:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:48.438 17:03:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:48.696 17:03:08 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:49:48.696 17:03:08 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:49:48.696 17:03:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:49:48.696 17:03:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:48.696 17:03:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:48.696 17:03:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:48.696 17:03:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:48.955 17:03:08 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:49:48.955 17:03:08 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:49:48.955 17:03:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:49:48.955 17:03:08 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:49:49.213 17:03:08 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:49:49.213 17:03:08 keyring_file -- keyring/file.sh@1 -- # cleanup 00:49:49.213 17:03:08 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.f38UgEXU6m /tmp/tmp.MJfDHPlTIY 00:49:49.213 17:03:08 keyring_file -- keyring/file.sh@20 -- # killprocess 2997450 00:49:49.213 17:03:08 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2997450 ']' 00:49:49.213 17:03:08 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2997450 00:49:49.213 17:03:08 keyring_file -- common/autotest_common.sh@951 -- # 
uname
00:49:49.213 17:03:08 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:49:49.213 17:03:08 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2997450
00:49:49.213 17:03:08 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:49:49.213 17:03:08 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:49:49.213 17:03:08 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2997450'
killing process with pid 2997450
17:03:08 keyring_file -- common/autotest_common.sh@965 -- # kill 2997450
Received shutdown signal, test time was about 1.000000 seconds
00:49:49.213
00:49:49.213 Latency(us)
00:49:49.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:49.213 ===================================================================================================================
00:49:49.213 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:49:49.213 17:03:08 keyring_file -- common/autotest_common.sh@970 -- # wait 2997450
00:49:49.471 17:03:09 keyring_file -- keyring/file.sh@21 -- # killprocess 2995875
00:49:49.471 17:03:09 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2995875 ']'
00:49:49.471 17:03:09 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2995875
00:49:49.471 17:03:09 keyring_file -- common/autotest_common.sh@951 -- # uname
00:49:49.471 17:03:09 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:49:49.471 17:03:09 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2995875
00:49:49.471 17:03:09 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:49:49.471 17:03:09 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:49:49.471 17:03:09 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2995875'
killing process with pid 2995875
17:03:09 keyring_file -- common/autotest_common.sh@965 -- # kill 2995875
00:49:49.471 [2024-07-22 17:03:09.030948] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:49:49.471 17:03:09 keyring_file -- common/autotest_common.sh@970 -- # wait 2995875
00:49:50.037
00:49:50.037 real 0m14.242s
00:49:50.037 user 0m35.339s
00:49:50.037 sys 0m3.451s
00:49:50.037 17:03:09 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable
00:49:50.037 17:03:09 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:49:50.037 ************************************
00:49:50.037 END TEST keyring_file
00:49:50.037 ************************************
00:49:50.037 17:03:09 -- spdk/autotest.sh@296 -- # [[ y == y ]]
00:49:50.037 17:03:09 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:49:50.037 17:03:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:49:50.037 17:03:09 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:49:50.037 17:03:09 -- common/autotest_common.sh@10 -- # set +x
00:49:50.037 ************************************
00:49:50.037 START TEST keyring_linux
00:49:50.037 ************************************
00:49:50.037 17:03:09 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:49:50.037 * Looking for test storage...
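Before the keyring_linux trace continues, it is worth condensing what keyring_file just verified: a file-backed PSK must be readable only by its owner (the 0660 attempt failed with "Invalid permissions ... 0100660"), a key's refcnt tracks how many controllers hold it, and removing an in-use key only marks it removed until the last reference drops. A condensed sketch of the happy path, reusing the interchange key and RPCs from the trace (rpc.py abbreviates the full scripts/rpc.py invocation used throughout):

  path=$(mktemp)
  # Interchange-format PSK for 00112233445566778899aabbccddeeff, digest 0.
  echo "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$path"
  chmod 0600 "$path"   # anything looser is rejected by keyring_file_add_key
  rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"
  # Controllers reference the key by name; the keyring resolves the file.
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk key0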
00:49:50.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:49:50.037 17:03:09 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:49:50.037 17:03:09 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:50.037 17:03:09 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:49:50.037 17:03:09 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:50.038 17:03:09 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:50.038 17:03:09 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:50.038 17:03:09 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:50.038 17:03:09 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:50.038 17:03:09 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:50.038 17:03:09 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:50.038 17:03:09 keyring_linux -- paths/export.sh@5 -- # export PATH 00:49:50.038 17:03:09 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:49:50.038 17:03:09 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:49:50.038 17:03:09 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:49:50.038 17:03:09 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:49:50.038 17:03:09 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:49:50.038 17:03:09 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:49:50.038 17:03:09 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@705 -- # python - 00:49:50.038 17:03:09 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:49:50.038 /tmp/:spdk-test:key0 00:49:50.038 17:03:09 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:49:50.038 17:03:09 keyring_linux -- nvmf/common.sh@705 -- # python - 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:49:50.038 17:03:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:49:50.038 /tmp/:spdk-test:key1 00:49:50.038 17:03:09 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2998361 00:49:50.038 17:03:09 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:49:50.038 17:03:09 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2998361 00:49:50.038 17:03:09 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 2998361 ']' 00:49:50.038 17:03:09 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:50.038 17:03:09 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:49:50.038 17:03:09 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:50.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:50.038 17:03:09 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:50.038 17:03:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:49:50.038 [2024-07-22 17:03:09.669655] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:49:50.038 [2024-07-22 17:03:09.669735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998361 ] 00:49:50.297 EAL: No free 2048 kB hugepages reported on node 1 00:49:50.297 [2024-07-22 17:03:09.740739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:50.297 [2024-07-22 17:03:09.824500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:50.555 17:03:10 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:50.555 17:03:10 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:49:50.555 17:03:10 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:49:50.555 17:03:10 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:50.555 17:03:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:49:50.555 [2024-07-22 17:03:10.067656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:50.555 null0 00:49:50.555 [2024-07-22 17:03:10.099737] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:49:50.555 [2024-07-22 17:03:10.100293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:49:50.555 17:03:10 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:50.555 17:03:10 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:49:50.555 27028024 00:49:50.555 17:03:10 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:49:50.555 719393634 00:49:50.555 17:03:10 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2998449 00:49:50.555 17:03:10 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:49:50.555 17:03:10 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2998449 /var/tmp/bperf.sock 00:49:50.555 17:03:10 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 2998449 ']' 00:49:50.555 17:03:10 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:50.555 17:03:10 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:49:50.555 17:03:10 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:50.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:50.555 17:03:10 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:50.555 17:03:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:49:50.555 [2024-07-22 17:03:10.163524] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:49:50.555 [2024-07-22 17:03:10.163586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998449 ] 00:49:50.555 EAL: No free 2048 kB hugepages reported on node 1 00:49:50.813 [2024-07-22 17:03:10.234051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:50.813 [2024-07-22 17:03:10.324624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:50.813 17:03:10 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:50.813 17:03:10 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:49:50.813 17:03:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:49:50.813 17:03:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:49:51.072 17:03:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:49:51.072 17:03:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:49:51.330 17:03:10 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:49:51.330 17:03:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:49:51.587 [2024-07-22 17:03:11.173987] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:49:51.846 nvme0n1 00:49:51.846 17:03:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:49:51.846 17:03:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:49:51.846 17:03:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:49:51.846 17:03:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:49:51.846 17:03:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:49:51.846 17:03:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:52.103 17:03:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:49:52.103 17:03:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:49:52.103 17:03:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:49:52.103 17:03:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:49:52.103 17:03:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:52.103 17:03:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:52.103 17:03:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:49:52.103 17:03:11 keyring_linux -- keyring/linux.sh@25 -- # sn=27028024 00:49:52.103 17:03:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:49:52.103 17:03:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
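The check_keys helper is comparing serial numbers here: keyring_linux never writes the PSK to disk, it parks it in the kernel session keyring and lets SPDK resolve it by name once the Linux keyring backend is enabled. A minimal sketch of that flow, using the same key names and RPCs as the trace (the serial number is run-specific, 27028024 in this job; rpc.py again abbreviates scripts/rpc.py):

  # Park the interchange PSK in the session keyring; keyctl prints its serial.
  keyctl add user :spdk-test:key0 \
      "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
  # Enable the Linux keyring backend in the app, then attach by key name.
  rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  # Verify the key is resident and its payload round-trips.
  sn=$(keyctl search @s user :spdk-test:key0)
  keyctl print "$sn"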
00:49:52.103 17:03:11 keyring_linux -- keyring/linux.sh@26 -- # [[ 27028024 == \2\7\0\2\8\0\2\4 ]]
00:49:52.103 17:03:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 27028024
00:49:52.103 17:03:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:49:52.103 17:03:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:49:52.361 Running I/O for 1 seconds...
00:49:53.294
00:49:53.294 Latency(us)
00:49:53.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:53.294 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:49:53.294 nvme0n1 : 1.01 6404.59 25.02 0.00 0.00 19851.94 11796.48 33787.45
00:49:53.294 ===================================================================================================================
00:49:53.294 Total : 6404.59 25.02 0.00 0.00 19851.94 11796.48 33787.45
00:49:53.294 0
00:49:53.294 17:03:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:49:53.294 17:03:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:49:53.553 17:03:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:49:53.553 17:03:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:49:53.553 17:03:13 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:49:53.553 17:03:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:49:53.553 17:03:13 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:49:53.553 17:03:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:49:53.812 17:03:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:49:53.812 17:03:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:49:53.812 17:03:13 keyring_linux -- keyring/linux.sh@23 -- # return
00:49:53.812 17:03:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:49:53.812 17:03:13 keyring_linux -- common/autotest_common.sh@648 -- # local es=0
00:49:53.812 17:03:13 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:49:53.812 17:03:13 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:49:53.812 17:03:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:49:53.812 17:03:13 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd
00:49:53.812 17:03:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:49:53.812 17:03:13 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:49:53.812 17:03:13 keyring_linux -- keyring/common.sh@8 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:49:54.070 [2024-07-22 17:03:13.635776] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:49:54.070 [2024-07-22 17:03:13.636290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242d270 (107): Transport endpoint is not connected 00:49:54.070 [2024-07-22 17:03:13.637280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242d270 (9): Bad file descriptor 00:49:54.070 [2024-07-22 17:03:13.638279] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:49:54.070 [2024-07-22 17:03:13.638300] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:49:54.070 [2024-07-22 17:03:13.638331] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:49:54.070 request: 00:49:54.070 { 00:49:54.070 "name": "nvme0", 00:49:54.070 "trtype": "tcp", 00:49:54.070 "traddr": "127.0.0.1", 00:49:54.070 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:49:54.070 "adrfam": "ipv4", 00:49:54.070 "trsvcid": "4420", 00:49:54.070 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:54.070 "psk": ":spdk-test:key1", 00:49:54.070 "method": "bdev_nvme_attach_controller", 00:49:54.070 "req_id": 1 00:49:54.070 } 00:49:54.070 Got JSON-RPC error response 00:49:54.070 response: 00:49:54.070 { 00:49:54.070 "code": -5, 00:49:54.070 "message": "Input/output error" 00:49:54.070 } 00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@33 -- # sn=27028024 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 27028024 00:49:54.070 1 links removed 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@33 -- # sn=719393634 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 719393634 00:49:54.070 1 links removed 00:49:54.070 17:03:13 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 2998449
00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 2998449 ']'
00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 2998449
00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@951 -- # uname
00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2998449
00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2998449'
killing process with pid 2998449
17:03:13 keyring_linux -- common/autotest_common.sh@965 -- # kill 2998449
Received shutdown signal, test time was about 1.000000 seconds
00:49:54.070
00:49:54.070 Latency(us)
00:49:54.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:54.070 ===================================================================================================================
00:49:54.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:49:54.070 17:03:13 keyring_linux -- common/autotest_common.sh@970 -- # wait 2998449
00:49:54.328 17:03:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2998361
00:49:54.328 17:03:13 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 2998361 ']'
00:49:54.328 17:03:13 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 2998361
00:49:54.328 17:03:13 keyring_linux -- common/autotest_common.sh@951 -- # uname
00:49:54.328 17:03:13 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:49:54.328 17:03:13 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2998361
00:49:54.328 17:03:13 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:49:54.328 17:03:13 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:49:54.328 17:03:13 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2998361'
killing process with pid 2998361
17:03:13 keyring_linux -- common/autotest_common.sh@965 -- # kill 2998361
00:49:54.894
00:49:54.894 real 0m4.870s
00:49:54.894 user 0m9.156s
00:49:54.894 sys 0m1.651s
00:49:54.894 17:03:14 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable
00:49:54.894 17:03:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:49:54.894 ************************************
00:49:54.894 END TEST keyring_linux
00:49:54.894 ************************************
00:49:54.894 17:03:14 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:49:54.894 17:03:14 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:49:54.894 17:03:14 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:49:54.894 17:03:14 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:49:54.894 17:03:14 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:49:54.894 17:03:14 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:49:54.894 17:03:14 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:49:54.894 17:03:14 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:49:54.894 17:03:14 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:49:54.894 17:03:14 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
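The keyring work itself ended with the two "1 links removed" lines above: cleanup() in linux.sh resolves each test key's serial and unlinks it from the session keyring, so nothing outlives the suite even once the SPDK processes are gone. The run of '[' 0 -eq 1 ']' checks here and just below is autotest.sh deciding which optional suites not to run. A sketch of the unlink step, equivalent to what the script's unlink_key helper does per key:

  # Resolve each test key's serial number and drop it from the session
  # keyring; keyctl reports "1 links removed" per key, as in the trace.
  for name in :spdk-test:key0 :spdk-test:key1; do
      sn=$(keyctl search @s user "$name")
      keyctl unlink "$sn"
  done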
00:49:54.894 17:03:14 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:49:54.894 17:03:14 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:49:54.894 17:03:14 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:49:54.894 17:03:14 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:49:54.894 17:03:14 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:49:54.894 17:03:14 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:49:54.894 17:03:14 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:49:54.894 17:03:14 -- common/autotest_common.sh@720 -- # xtrace_disable 00:49:54.894 17:03:14 -- common/autotest_common.sh@10 -- # set +x 00:49:54.894 17:03:14 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:49:54.894 17:03:14 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:49:54.894 17:03:14 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:49:54.894 17:03:14 -- common/autotest_common.sh@10 -- # set +x 00:49:56.794 INFO: APP EXITING 00:49:56.794 INFO: killing all VMs 00:49:56.794 INFO: killing vhost app 00:49:56.794 WARN: no vhost pid file found 00:49:56.794 INFO: EXIT DONE 00:49:58.169 0000:81:00.0 (8086 0a54): Already using the nvme driver 00:49:58.169 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:49:58.169 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:49:58.169 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:49:58.169 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:49:58.169 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:49:58.169 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:49:58.169 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:49:58.169 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:49:58.169 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:49:58.169 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:49:58.169 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:49:58.169 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:49:58.169 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:49:58.169 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:49:58.169 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:49:58.169 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:49:59.543 Cleaning 00:49:59.543 Removing: /var/run/dpdk/spdk0/config 00:49:59.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:49:59.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:49:59.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:49:59.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:49:59.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:49:59.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:49:59.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:49:59.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:49:59.543 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:49:59.543 Removing: /var/run/dpdk/spdk0/hugepage_info 00:49:59.543 Removing: /var/run/dpdk/spdk1/config 00:49:59.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:49:59.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:49:59.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:49:59.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:49:59.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:49:59.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:49:59.543 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:49:59.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:49:59.543 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:49:59.543 Removing: /var/run/dpdk/spdk1/hugepage_info 00:49:59.543 Removing: /var/run/dpdk/spdk1/mp_socket 00:49:59.543 Removing: /var/run/dpdk/spdk2/config 00:49:59.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:49:59.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:49:59.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:49:59.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:49:59.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:49:59.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:49:59.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:49:59.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:49:59.544 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:49:59.544 Removing: /var/run/dpdk/spdk2/hugepage_info 00:49:59.544 Removing: /var/run/dpdk/spdk3/config 00:49:59.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:49:59.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:49:59.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:49:59.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:49:59.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:49:59.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:49:59.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:49:59.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:49:59.544 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:49:59.544 Removing: /var/run/dpdk/spdk3/hugepage_info 00:49:59.544 Removing: /var/run/dpdk/spdk4/config 00:49:59.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:49:59.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:49:59.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:49:59.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:49:59.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:49:59.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:49:59.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:49:59.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:49:59.544 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:49:59.544 Removing: /var/run/dpdk/spdk4/hugepage_info 00:49:59.544 Removing: /dev/shm/bdev_svc_trace.1 00:49:59.544 Removing: /dev/shm/nvmf_trace.0 00:49:59.544 Removing: /dev/shm/spdk_tgt_trace.pid2655335 00:49:59.544 Removing: /var/run/dpdk/spdk0 00:49:59.544 Removing: /var/run/dpdk/spdk1 00:49:59.544 Removing: /var/run/dpdk/spdk2 00:49:59.544 Removing: /var/run/dpdk/spdk3 00:49:59.544 Removing: /var/run/dpdk/spdk4 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2653529 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2654390 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2655335 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2655769 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2656456 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2656578 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2657314 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2657319 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2657563 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2658906 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2659983 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2660286 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2660479 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2660679 00:49:59.544 Removing: 
/var/run/dpdk/spdk_pid2660867 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2661028 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2661199 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2661486 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2661940 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2664285 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2664458 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2664621 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2664745 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2665054 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2665179 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2665483 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2665555 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2665783 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2665797 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2665983 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2666086 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2666451 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2666608 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2666817 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2666979 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2667116 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2667186 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2667457 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2667616 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2667767 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2668050 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2668202 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2668361 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2668516 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2668789 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2668947 00:49:59.544 Removing: /var/run/dpdk/spdk_pid2669100 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2669377 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2669535 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2669692 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2669866 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2670122 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2670280 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2670434 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2670716 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2670871 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2671032 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2671219 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2671423 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2673903 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2730300 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2733205 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2740878 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2744506 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2747148 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2747670 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2755479 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2755481 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2756131 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2756669 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2757333 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2757728 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2757731 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2757989 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2758010 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2758097 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2758674 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2759328 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2759991 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2760387 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2760403 00:49:59.803 Removing: 
/var/run/dpdk/spdk_pid2760548 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2761442 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2762251 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2767901 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2768165 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2770979 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2775647 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2777746 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2784708 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2790599 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2791888 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2792548 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2803674 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2806134 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2831716 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2834900 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2836088 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2837893 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2838028 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2838164 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2838277 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2838614 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2839934 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2840650 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2840966 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2842571 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2842966 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2843434 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2846242 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2849911 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2853428 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2878345 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2881002 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2885025 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2886093 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2887195 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2890028 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2892670 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2897471 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2897579 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2900659 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2900998 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2901149 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2901418 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2901423 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2903518 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2904691 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2905868 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2907040 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2908218 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2909468 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2913752 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2914080 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2915746 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2916619 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2921407 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2923374 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2927075 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2931307 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2938062 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2942909 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2942912 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2956616 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2957102 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2957542 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2957955 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2958533 00:49:59.803 Removing: 
/var/run/dpdk/spdk_pid2958960 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2959460 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2959877 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2962745 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2962930 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2967624 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2967812 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2969527 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2975002 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2975007 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2978409 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2979813 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2981207 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2982031 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2983354 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2984228 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2990092 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2990381 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2990776 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2992547 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2992947 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2993223 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2995875 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2995883 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2997450 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2998361 00:49:59.803 Removing: /var/run/dpdk/spdk_pid2998449 00:49:59.803 Clean 00:50:00.062 17:03:19 -- common/autotest_common.sh@1447 -- # return 0 00:50:00.062 17:03:19 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:50:00.062 17:03:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:50:00.062 17:03:19 -- common/autotest_common.sh@10 -- # set +x 00:50:00.062 17:03:19 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:50:00.062 17:03:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:50:00.062 17:03:19 -- common/autotest_common.sh@10 -- # set +x 00:50:00.062 17:03:19 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:50:00.062 17:03:19 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:50:00.062 17:03:19 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:50:00.062 17:03:19 -- spdk/autotest.sh@391 -- # hash lcov 00:50:00.062 17:03:19 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:50:00.062 17:03:19 -- spdk/autotest.sh@393 -- # hostname 00:50:00.062 17:03:19 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:50:00.320 geninfo: WARNING: invalid characters removed from testname! 
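The "Cleaning" pass above enumerates every piece of runtime state the run left behind: one /var/run/dpdk/spdkN directory per SPDK instance (config, fbarray memseg/memzone maps, hugepage_info, mp_socket), the shared-memory trace files under /dev/shm, and one spdk_pid lock file per process launched during the tests. A hedged sketch of that sweep; the paths mirror the log, but the loop itself is illustrative rather than the actual autotest_cleanup implementation:

# remove per-instance DPDK runtime directories and spdk_pid lock files
for f in /var/run/dpdk/spdk[0-9]* /var/run/dpdk/spdk_pid*; do
    [ -e "$f" ] || continue
    echo "Removing: $f"
    rm -rf "$f"
done
# drop the SPDK trace shm files (bdev_svc_trace.*, nvmf_trace.*, spdk_tgt_trace.*)
rm -f /dev/shm/*_trace.*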
00:50:32.433 17:03:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:50:32.433 17:03:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:50:34.961 17:03:54 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:50:37.490 17:03:56 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:50:40.774 17:03:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:50:43.302 17:04:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:50:46.590 17:04:05 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:50:46.590 17:04:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:46.590 17:04:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:50:46.590 17:04:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:46.590 17:04:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:46.590 17:04:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:46.590 17:04:05 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:46.590 17:04:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:46.590 17:04:05 -- paths/export.sh@5 -- $ export PATH 00:50:46.590 17:04:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:46.590 17:04:05 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:50:46.590 17:04:05 -- common/autobuild_common.sh@437 -- $ date +%s 00:50:46.590 17:04:05 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721660645.XXXXXX 00:50:46.590 17:04:05 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721660645.mp9Atx 00:50:46.590 17:04:05 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:50:46.590 17:04:05 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:50:46.590 17:04:05 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:50:46.590 17:04:05 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:50:46.590 17:04:05 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:50:46.590 17:04:05 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:50:46.590 17:04:05 -- common/autobuild_common.sh@453 -- $ get_config_params 00:50:46.590 17:04:05 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:50:46.590 17:04:05 -- common/autotest_common.sh@10 -- $ set +x 00:50:46.590 17:04:05 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:50:46.590 17:04:05 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:50:46.590 17:04:05 -- pm/common@17 -- $ local monitor 00:50:46.590 17:04:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:50:46.590 17:04:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:50:46.590 17:04:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:50:46.590 
17:04:05 -- pm/common@21 -- $ date +%s 00:50:46.590 17:04:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:50:46.590 17:04:05 -- pm/common@21 -- $ date +%s 00:50:46.590 17:04:05 -- pm/common@25 -- $ sleep 1 00:50:46.590 17:04:05 -- pm/common@21 -- $ date +%s 00:50:46.590 17:04:05 -- pm/common@21 -- $ date +%s 00:50:46.590 17:04:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721660645 00:50:46.590 17:04:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721660645 00:50:46.590 17:04:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721660645 00:50:46.590 17:04:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721660645 00:50:46.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721660645_collect-vmstat.pm.log 00:50:46.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721660645_collect-cpu-load.pm.log 00:50:46.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721660645_collect-cpu-temp.pm.log 00:50:46.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721660645_collect-bmc-pm.bmc.pm.log 00:50:47.157 17:04:06 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:50:47.157 17:04:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:50:47.157 17:04:06 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:50:47.157 17:04:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:50:47.157 17:04:06 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:50:47.157 17:04:06 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:50:47.157 17:04:06 -- spdk/autopackage.sh@19 -- $ timing_finish 00:50:47.157 17:04:06 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:50:47.157 17:04:06 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:50:47.157 17:04:06 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:50:47.157 17:04:06 -- spdk/autopackage.sh@20 -- $ exit 0 00:50:47.158 17:04:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:50:47.158 17:04:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:50:47.158 17:04:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:50:47.158 17:04:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:50:47.158 17:04:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:50:47.158 17:04:06 -- pm/common@44 -- $ pid=3010126 00:50:47.158 17:04:06 -- pm/common@50 -- $ kill -TERM 3010126 00:50:47.158 17:04:06 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:50:47.158 17:04:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:50:47.158 17:04:06 -- pm/common@44 -- $ pid=3010128 00:50:47.158 17:04:06 -- pm/common@50 -- $ kill -TERM 3010128 00:50:47.158 17:04:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:50:47.158 17:04:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:50:47.158 17:04:06 -- pm/common@44 -- $ pid=3010129 00:50:47.158 17:04:06 -- pm/common@50 -- $ kill -TERM 3010129 00:50:47.158 17:04:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:50:47.158 17:04:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:50:47.158 17:04:06 -- pm/common@44 -- $ pid=3010159 00:50:47.158 17:04:06 -- pm/common@50 -- $ sudo -E kill -TERM 3010159 00:50:47.158 + [[ -n 2546510 ]] 00:50:47.158 + sudo kill 2546510 00:50:47.167 [Pipeline] } 00:50:47.184 [Pipeline] // stage 00:50:47.190 [Pipeline] } 00:50:47.207 [Pipeline] // timeout 00:50:47.213 [Pipeline] } 00:50:47.230 [Pipeline] // catchError 00:50:47.236 [Pipeline] } 00:50:47.253 [Pipeline] // wrap 00:50:47.259 [Pipeline] } 00:50:47.275 [Pipeline] // catchError 00:50:47.284 [Pipeline] stage 00:50:47.287 [Pipeline] { (Epilogue) 00:50:47.301 [Pipeline] catchError 00:50:47.303 [Pipeline] { 00:50:47.317 [Pipeline] echo 00:50:47.319 Cleanup processes 00:50:47.324 [Pipeline] sh 00:50:47.622 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:50:47.622 3010279 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:50:47.622 3010390 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:50:47.635 [Pipeline] sh 00:50:47.913 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:50:47.913 ++ awk '{print $1}' 00:50:47.913 ++ grep -v 'sudo pgrep' 00:50:47.913 + sudo kill -9 3010279 00:50:47.925 [Pipeline] sh 00:50:48.292 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:50:58.268 [Pipeline] sh 00:50:58.599 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:50:58.599 Artifacts sizes are good 00:50:58.612 [Pipeline] archiveArtifacts 00:50:58.618 Archiving artifacts 00:50:58.880 [Pipeline] sh 00:50:59.162 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:50:59.177 [Pipeline] cleanWs 00:50:59.186 [WS-CLEANUP] Deleting project workspace... 00:50:59.186 [WS-CLEANUP] Deferred wipeout is used... 00:50:59.193 [WS-CLEANUP] done 00:50:59.195 [Pipeline] } 00:50:59.215 [Pipeline] // catchError 00:50:59.228 [Pipeline] sh 00:50:59.506 + logger -p user.info -t JENKINS-CI 00:50:59.514 [Pipeline] } 00:50:59.531 [Pipeline] // stage 00:50:59.537 [Pipeline] } 00:50:59.554 [Pipeline] // node 00:50:59.558 [Pipeline] End of Pipeline 00:50:59.592 Finished: SUCCESS